Building on the "Configuring a Basic Cluster Environment" setup, each node is given an additional network interface, to be used for internal communication among the three nodes.
The network topology is:
node1.example.com 192.168.43.63
node2.example.com 192.168.43.64
node3.example.com 192.168.43.65
node1.private.example.com 192.168.101.63
node2.private.example.com 192.168.101.64
node3.private.example.com 192.168.101.65
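If DNS does not resolve these names, the topology above can be recorded in /etc/hosts on every node (and on the hypervisor). A sketch of the entries; the short aliases are an assumption, not part of the original setup:

```
192.168.43.63    node1.example.com          node1
192.168.43.64    node2.example.com          node2
192.168.43.65    node3.example.com          node3
192.168.101.63   node1.private.example.com  node1.private
192.168.101.64   node2.private.example.com  node2.private
192.168.101.65   node3.private.example.com  node3.private
```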
The cluster nodes are virtual machines running on a Red Hat Enterprise Linux host with KVM/libvirt, so the software fence device fence-virtd must be configured and run on the hypervisor. In multicast mode, a virtual machine is fenced by sending a request, signed with a shared key, to the libvirt fencing multicast group. This means the node VMs can run on different hypervisor machines, as long as every hypervisor has fence-virtd configured for the same multicast group and uses the same shared key. To set up the fence-virtd software fence device on the hypervisors running the virtual machines, perform the following steps:
1. Install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages on the hypervisor. These provide the virtual machine fencing daemon, the libvirt integration, and the multicast listener, respectively.
[root@hypervisor ~]# yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast
2. Create a shared key /etc/cluster/fence_xvm.key on your hypervisor. You may have to create the /etc/cluster directory first.
[root@hypervisor ~]# mkdir -p /etc/cluster
[root@hypervisor ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4
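As a quick sanity check, the key produced by bs=1k count=4 should be exactly 4096 bytes, and it is sensible to make it readable by root only. A minimal sketch, run against a throwaway temp file so it works anywhere; the real key lives at /etc/cluster/fence_xvm.key:

```shell
# Sketch: reproduce the dd command against a temp file and verify
# that bs=1k count=4 yields exactly 4096 bytes of random data.
key=$(mktemp)
dd if=/dev/urandom of="$key" bs=1k count=4 2>/dev/null
chmod 600 "$key"          # restrict access, as you would for the real key
stat -c %s "$key"         # prints the file size in bytes: 4096
rm -f "$key"
```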
3. Configure the fence-virtd daemon on the hypervisor. Most options can keep their default values, but make sure to select the libvirt backend and the multicast listener.
[root@hypervisor ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Available backends:
libvirt 0.1
Available listeners:
multicast 1.1
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this must be set (typically to virbr0).
Set to 'none' for no interface.
Interface [none]: private
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [checkpoint]: libvirt
The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
	libvirt {
		uri = "qemu:///system";
	}
}

listeners {
	multicast {
		port = "1229";
		family = "ipv4";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
		interface = "private";
	}
}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
4. Enable and start the fence_virtd daemon on the hypervisor.
[root@hypervisor ~]# systemctl enable fence_virtd; systemctl restart fence_virtd
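To confirm the daemon came up and is listening on the multicast port, it can be checked on the hypervisor; the ss flags show UDP sockets, numeric ports, and owning processes (follow-up checks assumed here, not part of the original procedure):

```
[root@hypervisor ~]# systemctl status fence_virtd
[root@hypervisor ~]# ss -ulpn | grep 1229
```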
5. Copy the shared key /etc/cluster/fence_xvm.key to all cluster nodes, keeping the same file name and path as on the hypervisor. Create the /etc/cluster directory on the nodes first if it does not exist.
[root@hypervisor ~]# for node in node{1..3}; do scp /etc/cluster/fence_xvm.key root@$node:/etc/cluster/; done
6. Configure the firewall to open port 1229 on every machine involved, including the hypervisor and all cluster nodes:
[root@hypervisor ~]# firewall-cmd --add-port=1229/tcp --add-port=1229/udp --permanent; firewall-cmd --reload
[root@nodeY ~]# firewall-cmd --add-port=1229/tcp --add-port=1229/udp --permanent; firewall-cmd --reload
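Before creating the stonith resources, it is worth verifying from one of the cluster nodes that fence_virtd can be reached over multicast. Assuming the fence-virt client package and the shared key are in place on the node, fence_xvm -o list should return the libvirt domains known to the hypervisor:

```
[root@node1 ~]# fence_xvm -o list
```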
7. Create the fence_xvm stonith resources on one of the cluster nodes. Note that the port parameter must be the virtual machine's name as known to libvirt on the hypervisor (the domain name shown by virsh list), while pcmk_host_list is the cluster node name; in this setup the two happen to be identical.
[root@node1 ~]# pcs stonith create fencenode1 fence_xvm port=node1 pcmk_host_list=node1
[root@node1 ~]# pcs stonith create fencenode2 fence_xvm port=node2 pcmk_host_list=node2
[root@node1 ~]# pcs stonith create fencenode3 fence_xvm port=node3 pcmk_host_list=node3
[root@node1 ~]# pcs stonith show
 fencenode1 (stonith:fence_xvm): Started node1
 fencenode2 (stonith:fence_xvm): Started node2
 fencenode3 (stonith:fence_xvm): Started node3
8. Verify that fencing works by fencing one of the other nodes:
[root@node1 ~]# pcs stonith fence node2
Node: node2 fenced
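A successful fence reboots node2 via the hypervisor; once it comes back up, it should rejoin the cluster. A sketch of the follow-up checks from another node (commands only, output omitted):

```
[root@node1 ~]# pcs status nodes
[root@node1 ~]# pcs status
```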