Linux Enterprise Practice: RHCS (Red Hat Cluster Suite)


1. Preparation

Prepare two virtual machines.

Configure the yum repositories on both nodes:

[root@server1 ~]# cat /etc/yum.repos.d/yum.repo 
[rhel6.5]
name=rhel6.5
baseurl=http://172.25.60.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
gpgcheck=0
baseurl=http://172.25.60.250/rhel6.5/HighAvailability

[LoadBalancer]
name=LoadBalancer
gpgcheck=0
baseurl=http://172.25.60.250/rhel6.5/LoadBalancer

[ResilientStorage]
name=ResilientStorage
gpgcheck=0
baseurl=http://172.25.60.250/rhel6.5/ResilientStorage

[root@server2 yum.repos.d]# cat yum.repo 
[rhel6.5]
name=rhel6.5
baseurl=http://172.25.60.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
gpgcheck=0
baseurl=http://172.25.60.250/rhel6.5/HighAvailability

[LoadBalancer]
name=LoadBalancer
gpgcheck=0
baseurl=http://172.25.60.250/rhel6.5/LoadBalancer

[ResilientStorage]
name=ResilientStorage
gpgcheck=0
baseurl=http://172.25.60.250/rhel6.5/ResilientStorage
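A quick way to confirm that all four repositories are usable on each node (not shown in the original transcript):

[root@server1 ~]# yum clean all
[root@server1 ~]# yum repolist    # should list rhel6.5, HighAvailability, LoadBalancer and ResilientStorage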

Install the services:

[root@server1 ~]# yum install ricci luci -y  # ricci is the cluster node agent; luci is the web management interface
[root@server2 ~]# yum install ricci -y

The installation creates a ricci user:

[root@server1 ~]# id ricci
uid=140(ricci) gid=140(ricci) groups=140(ricci)

[root@server2 ~]# id ricci
uid=140(ricci) gid=140(ricci) groups=140(ricci)

Set a password for the ricci user on both nodes:

[root@server1 ~]# passwd ricci
Changing password for user ricci.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.

[root@server2 ~]# passwd ricci
Changing password for user ricci.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.

Start the services:

[root@server1 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server1 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
	(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci

[root@server1 ~]# chkconfig ricci on   # enable at boot
[root@server1 ~]# chkconfig luci on

[root@server2 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server2 ~]# chkconfig ricci on
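The cluster nodes are addressed by the hostnames server1 and server2, so both nodes (and the host running luci) must be able to resolve those names. If DNS does not handle this, a minimal /etc/hosts addition would do (IP addresses taken from the scp commands later in this article; adjust to your environment):

172.25.60.1   server1
172.25.60.2   server2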

Open the luci web interface (https://server1:8084, as printed above) and log in with a system user of the host running luci.

2. Build a high-availability cluster from server1 and server2

In luci, create a new cluster named zjy and add server1 and server2 as nodes, authenticating with the ricci password set earlier, then submit.
The configuration made in the web interface is written automatically to the cluster configuration file on each node:

[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf  cman-notify.d
[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="1" name="zjy">
	<clusternodes>
		<clusternode name="server1" nodeid="1"/>
		<clusternode name="server2" nodeid="2"/>
	</clusternodes>
	<cman expected_votes="1" two_node="1"/>
	<fencedevices/>
	<rm/>
</cluster>

[root@server2 ~]# cd /etc/cluster/
[root@server2 cluster]# ls
cluster.conf  cman-notify.d
[root@server2 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="1" name="zjy">
	<clusternodes>
		<clusternode name="server1" nodeid="1"/>
		<clusternode name="server2" nodeid="2"/>
	</clusternodes>
	<cman expected_votes="1" two_node="1"/>
	<fencedevices/>
	<rm/>
</cluster>

Check the cluster status:

[root@server1 cluster]# clustat 
Cluster Status for zjy @ Sat Feb 22 18:46:12 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online, Local
 server2                                 2 Online

[root@server2 cluster]# clustat 
Cluster Status for zjy @ Sat Feb 22 18:46:52 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online
 server2                                 2 Online, Local

3. Fencing

Install the fence daemon on the physical host:

[root@foundation60 ~]# yum search fence
fence-virtd.x86_64 : Daemon which handles requests from fence-virt
fence-virtd-libvirt.x86_64 : Libvirt backend for fence-virtd
fence-virtd-multicast.x86_64 : Multicast listener for fence-virtd
[root@foundation60 ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 -y

[root@foundation60 ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]: 

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
backends {
	libvirt {
		uri = "qemu:///system";
	}

}

listeners {
	multicast {
		port = "1229";
		family = "ipv4";
		interface = "br0";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
	}

}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation60 ~]# cat /etc/fence_virt.conf
backends {
	libvirt {
		uri = "qemu:///system";
	}

}

listeners {
	multicast {
		port = "1229";
		family = "ipv4";
		interface = "br0";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
	}

}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}

Create the /etc/cluster directory and generate the key:

[root@foundation60 etc]# systemctl start fence_virtd.service 
[root@foundation60 etc]# mkdir cluster
[root@foundation60 etc]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000119929 s, 1.1 MB/s

Copy the key to both cluster nodes:

[root@foundation60 cluster]# scp fence_xvm.key root@172.25.60.1:/etc/cluster/
root@172.25.60.1's password: 
fence_xvm.key                         100%  128     0.1KB/s   00:00    
[root@foundation60 cluster]# scp fence_xvm.key root@172.25.60.2:/etc/cluster/
root@172.25.60.2's password: 
fence_xvm.key                         100%  128     0.1KB/s   00:00 
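As a quick sanity check (not part of the original transcript), the key must be byte-identical on the host and on both nodes; comparing checksums confirms the copy worked:

[root@foundation60 cluster]# md5sum /etc/cluster/fence_xvm.key
[root@server1 cluster]# md5sum /etc/cluster/fence_xvm.key
[root@server2 cluster]# md5sum /etc/cluster/fence_xvm.key    # all three sums should match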

Check that the fence daemon is running:

[root@foundation60 etc]# systemctl status fence_virtd.service 
 fence_virtd.service - Fence-Virt system host daemon
   Loaded: loaded (/usr/lib/systemd/system/fence_virtd.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-02-22 19:56:17 CST; 11s ago
  Process: 8240 ExecStart=/usr/sbin/fence_virtd $FENCE_VIRTD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 8245 (fence_virtd)
   CGroup: /system.slice/fence_virtd.service
           └─8245 /usr/sbin/fence_virtd -w

Feb 22 19:56:16 foundation60.ilt.example.com systemd[1]: Starting Fen...
Feb 22 19:56:17 foundation60.ilt.example.com fence_virtd[8245]: fence...
Feb 22 19:56:17 foundation60.ilt.example.com systemd[1]: Started Fenc...
Hint: Some lines were ellipsized, use -l to show in full.

Check that the fence key file is present on server1 and server2:

[root@server1 cluster]# ls
cluster.conf  cman-notify.d  fence_xvm.key
[root@server2 cluster]# ls
cluster.conf  cman-notify.d  fence_xvm.key

In the web interface, add a fence device (agent fence_xvm, named vmfence). Then add a fence method to each node and attach the vmfence device to it, using the node's virtual machine UUID as the domain; this produces the configuration shown below.
Restart the fence daemon on the physical host:

[root@foundation60 etc]# systemctl restart fence_virtd.service 
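With fence_virtd running on the host and the key distributed, multicast connectivity can be tested from either node before relying on it (illustrative):

[root@server1 ~]# fence_xvm -o list    # should list the virtual machines (name and UUID) known to the host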

Verify that the fence configuration was written to cluster.conf on both nodes:

# fence configuration on server1
[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="zjy">
	<clusternodes>
		<clusternode name="server1" nodeid="1">
			<fence>
				<method name="vmfence1">
					<device domain="305f57c5-a39c-4379-95f6-5e7bf46c75fa" name="vmfence"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="server2" nodeid="2">
			<fence>
				<method name="vmfence2">
					<device domain="7505e597-de97-4f09-b975-2eaed2cb0fc1" name="vmfence"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<cman expected_votes="1" two_node="1"/>
	<fencedevices>
		<fencedevice agent="fence_xvm" name="vmfence"/>
	</fencedevices>
</cluster>

# fence configuration on server2
[root@server2 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="zjy">
	<clusternodes>
		<clusternode name="server1" nodeid="1">
			<fence>
				<method name="vmfence1">
					<device domain="305f57c5-a39c-4379-95f6-5e7bf46c75fa" name="vmfence"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="server2" nodeid="2">
			<fence>
				<method name="vmfence2">
					<device domain="7505e597-de97-4f09-b975-2eaed2cb0fc1" name="vmfence"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<cman expected_votes="1" two_node="1"/>
	<fencedevices>
		<fencedevice agent="fence_xvm" name="vmfence"/>
	</fencedevices>
</cluster>

Test: fence each node from the other (the fenced node is rebooted):

[root@server1 cluster]# fence_node server2
fence server2 success

[root@server2 ~]# fence_node server1
fence server1 success

Failover Domains settings: create a failover domain that contains server1 and server2.
priority: the lower the number, the higher the priority (a sketch of the resulting cluster.conf fragment follows).
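For example, a prioritized failover domain in cluster.conf would look roughly like the fragment below (the domain name webfd is illustrative; with these values server1 is preferred over server2):

<rm>
	<failoverdomains>
		<failoverdomain name="webfd" nofailback="0" ordered="1" restricted="1">
			<failoverdomainnode name="server1" priority="1"/>
			<failoverdomainnode name="server2" priority="2"/>
		</failoverdomain>
	</failoverdomains>
</rm>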

Resources: install httpd on both nodes and prepare a test page on each:

[root@server1 ~]# yum install httpd -y
[root@server2 ~]# yum install httpd -y

[root@server1 ~]# cat /var/www/html/index.html
server1
[root@server2 ~]# cat /var/www/html/index.html
server2

In the web interface, add an IP Address resource for the VIP 172.25.60.100 and a Script resource pointing to /etc/init.d/httpd, then create a service group named apache in the failover domain that uses both resources. The resulting cluster.conf fragment is sketched below.
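The corresponding cluster.conf fragment would look roughly like this (webfd refers to the failover domain sketched earlier; the service name apache and the VIP 172.25.60.100 match the outputs below, the rest is illustrative):

<rm>
	<resources>
		<ip address="172.25.60.100" monitor_link="on"/>
		<script file="/etc/init.d/httpd" name="httpd"/>
	</resources>
	<service domain="webfd" name="apache" recovery="relocate">
		<ip ref="172.25.60.100"/>
		<script ref="httpd"/>
	</service>
</rm>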

[root@server1 ~]# clustat
Cluster Status for zjy @ Sat Feb 22 22:18:48 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online, Local, rgmanager
 server2                                 2 Online, rgmanager

 Service Name               Owner (Last)               State         
 ------- ----               ----- ------               -----         
 service:apache             server1                    started      

[root@server1 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:0f:b6:06 brd ff:ff:ff:ff:ff:ff
    inet 172.25.60.1/24 brd 172.25.60.255 scope global eth0
    inet 172.25.60.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe0f:b606/64 scope link 
       valid_lft forever preferred_lft forever

[root@foundation60 cluster]# curl 172.25.60.100
server1
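A common way to exercise both fencing and failover at this point (not performed in this transcript) is to crash the node that currently owns the service and watch it get fenced while the service relocates:

[root@server1 ~]# echo c > /proc/sysrq-trigger    # crash the active node; fence_xvm should power-cycle it
[root@server2 ~]# clustat                         # service:apache should show as started on server2
[root@foundation60 ~]# curl 172.25.60.100         # now answered by server2's page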

4. Shared storage

Three virtual machines are used.

Add a virtual disk to server3.

[root@server3 ~]# yum install scsi-* -y

[root@server2 ~]# yum install iscsi-* -y
[root@server1 ~]# yum install iscsi-* -y


[root@server3 ~]# fdisk -l  # check the newly added virtual disk
Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@server3 ~]# vim /etc/tgt/targets.conf
<target iqn.2020-02.com.example:server.target1>
    backing-store /dev/vda
</target>
[root@server3 ~]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@server3 ~]# tgt-admin -s
Target 1: iqn.2020-02.com.example:server.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 21475 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vda
            Backing store flags: 
    Account information:
    ACL information:
        ALL
[root@server3 ~]# ps ax  # there are two tgtd processes
 1062 ?        Ssl    0:00 tgtd
 1065 ?        S      0:00 tgtd
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.60.3
172.25.60.3:3260,1 iqn.2020-02.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2020-02.com.example:server.target1, portal: 172.25.60.3,3260] (multiple)
Login to [iface: default, target: iqn.2020-02.com.example:server.target1, portal: 172.25.60.3,3260] successful.

[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.60.3
Starting iscsid:                                           [  OK  ]
172.25.60.3:3260,1 iqn.2020-02.com.example:server.target1
[root@server2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2020-02.com.example:server.target1, portal: 172.25.60.3,3260] (multiple)
Login to [iface: default, target: iqn.2020-02.com.example:server.target1, portal: 172.25.60.3,3260] successful.
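The transcript does not show it, but for the shared storage to come back after a reboot the relevant services would typically be enabled at boot as well (a sketch):

[root@server3 ~]# chkconfig tgtd on     # iSCSI target
[root@server1 ~]# chkconfig iscsi on    # initiator re-login at boot
[root@server2 ~]# chkconfig iscsi on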

[root@server1 ~]# partprobe
[root@server1 ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@server2 ~]# partprobe
[root@server2 ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@server1 ~]# cat /proc/partitions 
major minor  #blocks  name

   8        0    9437184 sda
   8        1     512000 sda1
   8        2    8924160 sda2
 253        0    7979008 dm-0
 253        1     942080 dm-1
   8       16   20971520 sdb
[root@server2 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0    9437184 sda
   8        1     512000 sda1
   8        2    8924160 sda2
 253        0    7979008 dm-0
 253        1     942080 dm-1
   8       16   20971520 sdb

Partition the shared disk on server1:

[root@server1 ~]# fdisk -cu /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xadc97a1a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xadc97a1a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   8e  Linux LVM
[root@server1 ~]# partprobe 
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@server1 ~]# cat /proc/partitions 
major minor  #blocks  name

   8        0    9437184 sda
   8        1     512000 sda1
   8        2    8924160 sda2
 253        0    7979008 dm-0
 253        1     942080 dm-1
   8       16   20971520 sdb
   8       17   20970496 sdb1

[root@server2 ~]# partprobe 
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
[root@server2 ~]# cat /proc/partitions 
major minor  #blocks  name

   8        0    9437184 sda
   8        1     512000 sda1
   8        2    8924160 sda2
 253        0    7979008 dm-0
 253        1     942080 dm-1
   8       16   20971520 sdb
   8       17   20970496 sdb1
[root@server1 ~]# pvcreate /dev/sdb1
  dev_is_mpath: failed to get device for 8:17
  Physical volume "/dev/sdb1" successfully created
[root@server1 ~]# vgcreate dangdang /dev/sdb1
  Clustered volume group "dangdang" successfully created
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree 
  /dev/sda2  VolGroup lvm2 a--   8.51g     0 
  /dev/sdb1  dangdang lvm2 a--  20.00g 20.00g
[root@server2 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree 
  /dev/sda2  VolGroup lvm2 a--   8.51g     0 
  /dev/sdb1  dangdang lvm2 a--  20.00g 20.00g
[root@server1 ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree 
  VolGroup   1   2   0 wz--n-  8.51g     0 
  dangdang   1   0   0 wz--nc 20.00g 20.00g
[root@server2 ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree 
  VolGroup   1   2   0 wz--n-  8.51g     0 
  dangdang   1   0   0 wz--nc 20.00g 20.00g
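The "Clustered volume group" message from vgcreate and the c in the vgs Attr column (wz--nc) mean LVM is using cluster-wide locking. That requires clvmd to be running and locking_type = 3 in /etc/lvm/lvm.conf on both nodes; the transcript does not show that setup, but with the lvm2-cluster package installed it is typically done like this (a sketch):

[root@server1 ~]# lvmconf --enable-cluster      # sets locking_type = 3 in /etc/lvm/lvm.conf
[root@server1 ~]# /etc/init.d/clvmd start
[root@server1 ~]# chkconfig clvmd on
# repeat the same three commands on server2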
[root@server1 ~]# lvcreate -L 4G -n zjy dangdang
  Logical volume "zjy" created
[root@server1 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g                                             
  lv_swap VolGroup -wi-ao---- 920.00m                                             
  zjy     dangdang -wi-a-----   4.00g  
[root@server2 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g                                             
  lv_swap VolGroup -wi-ao---- 920.00m                                             
  zjy     dangdang -wi-a-----   4.00g  
[root@server2 ~]# mkfs.ext4 /dev/dangdang/zjy
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@server2 ~]# mount /dev/dangdang/zjy /mnt/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1023892   6430924  14% /
tmpfs                           510200   25656    484544   6% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4128448  139256   3779480   4% /mnt
[root@server1 ~]# mount /dev/dangdang/zjy /mnt/
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1103624   6351192  15% /
tmpfs                           510200   25656    484544   6% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4128448  139256   3779480   4% /mnt

Workflow: whichever node (server1 or server2) takes over the VIP mounts /dev/dangdang/zjy on /var/www/html and then starts httpd. Note that ext4 is not a cluster filesystem, so only the node running the service should have it mounted; rgmanager handles this (GFS2, used later, allows concurrent mounts).
Stop the apache service first:

[root@server2 ~]# clusvcadm -d apache
Local machine disabling service:apache...Success
[root@server2 ~]# clustat
Cluster Status for zjy @ Sat Feb 22 23:19:23 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online, rgmanager
 server2                                 2 Online, Local, rgmanager

 Service Name               Owner (Last)               State         
 ------- ----               ----- ------               -----         
 service:apache             (server1)                  disabled  
[root@server1 ~]# clustat
Cluster Status for zjy @ Sat Feb 22 23:19:32 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online, Local, rgmanager
 server2                                 2 Online, rgmanager

 Service Name               Owner (Last)               State         
 ------- ----               ----- ------               -----         
 service:apache             (server1)                  disabled      

In the web interface, add a Filesystem resource for /dev/dangdang/zjy with mount point /var/www/html and add it to the apache service group ahead of the httpd Script resource, so the filesystem is mounted before httpd starts. Then re-enable the service as sketched below.
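After submitting the changes, the service can be re-enabled (illustrative; the -m option chooses the starting member, and the clustat output below shows it running on server2):

[root@server2 ~]# clusvcadm -e apache -m server2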
Test:

[root@server2 ~]# clustat
Cluster Status for zjy @ Sat Feb 22 23:30:31 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online, rgmanager
 server2                                 2 Online, Local, rgmanager

 Service Name               Owner (Last)               State         
 ------- ----               ----- ------               -----         
 service:apache             server2                    started       
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1023900   6430916  14% /
tmpfs                           510200   25656    484544   6% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4128448  139256   3779480   4% /var/www/html

[root@server2 html]# pwd
/var/www/html
[root@server2 html]# cat index.html 
server3 iscis:index.html
[root@foundation60 images]# curl 172.25.60.100
server3 iscis:index.html

Relocate the service to server1:

[root@server2 ~]# clusvcadm -r apache -m server1

[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1103652   6351164  15% /
tmpfs                           510200   25656    484544   6% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4128448  139260   3779476   4% /var/www/html
[root@foundation60 images]# curl 172.25.60.100
server3 iscis:index.html

GFS2 experiment (GFS2 depends on the cluster)

Stop apache:

[root@server1 ~]# clusvcadm -d apache
Local machine disabling service:apache...Success
[root@server1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t zjy:mygfs2 /dev/dangdang/zjy  # -t is clustername:fsname; zjy must match the cluster name in cluster.conf, and -j 2 creates one journal per node
This will destroy any data on /dev/dangdang/zjy.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/dangdang/zjy
Blocksize:                 4096
Device Size                4.00 GB (1048576 blocks)
Filesystem Size:           4.00 GB (1048575 blocks)
Journals:                  2
Resource Groups:           16
Locking Protocol:          "lock_dlm"
Lock Table:                "zjy:mygfs2"
[root@server1 ~]# mount /dev/dangdang/zjy /mnt/
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1103656   6351160  15% /
tmpfs                           510200   31816    478384   7% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4193856  264776   3929080   7% /mnt


[root@server2 ~]#  mount /dev/dangdang/zjy /mnt/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1021852   6432964  14% /
tmpfs                           510200   31816    478384   7% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4193856  264776   3929080   7% /mnt

Test:
[root@server2 ~]# cp /etc/passwd /mnt/
[root@server1 ~]# ll /mnt
total 4
-rw-r--r-- 1 root root 1247 Feb 22 23:44 passwd
[root@server1 ~]# rm -fr /mnt/passwd 
[root@server2 ~]# ll /mnt/
total 0
[root@server1 ~]# vim /etc/fstab
/dev/dangdang/zjy       /var/www/html           gfs2    _netdev         0 0
[root@server2 ~]# vim /etc/fstab
/dev/dangdang/zjy       /var/www/html           gfs2    _netdev         0 0
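The _netdev option defers the mount until networking and the cluster stack are up. To have the GFS2 entries in /etc/fstab mounted automatically at boot on RHEL 6, the gfs2 init script is normally enabled as well (a sketch, assuming gfs2-utils provides /etc/init.d/gfs2):

[root@server1 ~]# chkconfig gfs2 on
[root@server2 ~]# chkconfig gfs2 on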

[root@server1 ~]# mount -a
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1103656   6351160  15% /
tmpfs                           510200   31816    478384   7% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4193856  264776   3929080   7% /var/www/html
[root@server2 ~]# mount -a
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1021848   6432968  14% /
tmpfs                           510200   31816    478384   7% /dev/shm
/dev/sda1                       495844   33469    436775   8% /boot
/dev/mapper/dangdang-zjy       4193856  264776   3929080   7% /var/www/html

In the web interface, remove the Filesystem entry from the apache service group and submit, then delete the Filesystem resource itself (the GFS2 volume is now mounted through /etc/fstab instead). Start apache again and test:

[root@server2 ~]# cd /var/www/html/
[root@server2 html]# vim index.html
[root@server2 html]# cat index.html 
gfs2:index.html
[root@server2 html]# clustat 
Cluster Status for zjy @ Sun Feb 23 00:01:27 2020
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 server1                                 1 Online, rgmanager
 server2                                 2 Online, Local, rgmanager

 Service Name               Owner (Last)               State         
 ------- ----               ----- ------               -----         
 service:apache             server2                    started   
[root@foundation60 images]# curl 172.25.60.100
gfs2:index.html


[root@server1 ~]# gfs2_tool sb /dev/dangdang/zjy all   # show all gfs2 superblock parameters
  mh_magic = 0x01161970
  mh_type = 1
  mh_format = 100
  sb_fs_format = 1801
  sb_multihost_format = 1900
  sb_bsize = 4096
  sb_bsize_shift = 12
  no_formal_ino = 2
  no_addr = 23
  no_formal_ino = 1
  no_addr = 22
  sb_lockproto = lock_dlm
  sb_locktable = zjy:mygfs2
  uuid = b2a72055-6d0f-379b-5b0d-6b4defcf51b7
[root@server1 ~]# gfs2_tool journals /dev/dangdang/zjy  # list the journals
journal1 - 128MB
journal0 - 128MB
2 journal(s) found.
[root@server1 ~]# gfs2_jadd -j 3 /dev/dangdang/zjy  # add journals (the filesystem must be mounted)
[root@server1 ~]# lvextend -L +1G /dev/dangdang/zjy   # extend the logical volume by 1G
  Extending logical volume zjy to 5.00 GiB
  Logical volume zjy successfully resized
[root@server1 ~]# gfs2_grow /dev/dangdang/zjy  # grow the GFS2 filesystem to fill the logical volume
FS: Mount Point: /var/www/html
FS: Device:      /dev/dm-2
FS: Size:        1048575 (0xfffff)
FS: RG size:     65533 (0xfffd)
DEV: Size:       1310720 (0x140000)
The file system grew by 1024MB.
gfs2_grow complete.
[root@server1 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g                                             
  lv_swap VolGroup -wi-ao---- 920.00m                                             
  zjy     dangdang -wi-ao----   5.00g        

[root@server2 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g                                             
  lv_swap VolGroup -wi-ao---- 920.00m                                             
  zjy     dangdang -wi-ao----   5.00g        
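To confirm that the mounted filesystem actually gained the extra space after gfs2_grow, check df on either node (illustrative):

[root@server1 ~]# df -h /var/www/html    # the size should now be about 5G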