Clustering with RHCS

RHCS (Red Hat Cluster Suite)
Goal: build a web cluster with luci/ricci
Preparation: cluster node 1 -> 172.25.30.1 (server1)
             cluster node 2 -> 172.25.30.4 (server4)

I Configuration

1. Configure the yum repositories
vim /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.31.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]      ## high availability
name=HighAvailability
baseurl=http://172.25.30.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]        ## load balancing
name=LoadBalancer
baseurl=http://172.25.30.250/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]     ## resilient storage
name=ResilientStorage
baseurl=http://172.25.30.250/rhel6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]   ## scalable file system
name=ScalableFileSystem
baseurl=http://172.25.31.250/rhel6.5/ScalableFileSystem
gpgcheck=0
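After saving the repo file, it is worth confirming that all four extra channels resolve; a quick check (output will vary with the mirror used):
yum clean all
yum repolist          ## rhel-source, HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem should all be listed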
2. Install the cluster packages (on server1)
yum install -y ricci
passwd ricci             ## set the initial password for the ricci user
/etc/init.d/ricci start     ## start the service
chkconfig ricci on        ## start at boot
yum install -y luci
/etc/init.d/luci start      ## start the service
chkconfig luci on        ## start at boot
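Once luci is running it serves its web UI over HTTPS on port 8084; a quick sanity check (output will vary):
netstat -antlp | grep 8084   ## luci should be listening on :8084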
3. Configure server4
scp rhel-source.repo 172.25.31.4:/etc/yum.repos.d/
yum install -y ricci
passwd ricci             
/etc/init.d/ricci  start
chkconfig ricci on
4. server1 and server4 must be able to resolve each other's hostnames (via DNS or /etc/hosts); see the sketch below.
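A minimal /etc/hosts entry for both nodes, using the addresses from the preparation section (adjust if real DNS is used instead):
172.25.30.1   server1
172.25.30.4   server4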
5. Test

Open https://172.25.31.1:8084 in a browser.
Click Create, enter the two host names, and create them as the two cluster nodes.
Once node creation completes, a cluster.conf file is generated under /etc/cluster on node 1 and node 2.
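Membership can be confirmed from either node; a quick check (the cluster name will be whatever was entered in luci):
clustat                 ## both nodes should show as Online
cman_tool nodes         ## lists node IDs and membership status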

II High-Availability Cluster

Preparation: install fence-virtd-libvirt.x86_64 on the physical host and create a fence device
1. Configure the physical host

[root@foundation30 ~]# yum install -y fence-virtd-libvirt.x86_64
rpm -qa | grep fence     ## the fence packages that need to be installed
fence-virtd-multicast-0.3.2-2.el7.x86_64
libxshmfence-1.2-1.el7.x86_64
fence-virtd-0.3.2-2.el7.x86_64
fence-virtd-libvirt-0.3.2-2.el7.x86_64
fence_virtd -c          ## interactive fence_virtd configuration
Module search path [/usr/lib64/fence-virt]:   ## module search path (accept the default)

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:   ## multicast listener module

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:      ## multicast address to listen on; keep the default

Using ipv4 as family.

Multicast IP Port [1229]:                ## multicast port

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0             ## use the br0 bridge

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:   ## location of the shared key file

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:     ## libvirt backend module

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y    ## confirm and write the configuration
[root@foundation30 ~]# mkdir /etc/cluster/ 
[root@foundation30 ~]# ll -d /etc/cluster/ 
drwxr-xr-x 2 root root 6 Jul 24 10:27 /etc/cluster/
[root@foundation30 ~]# ll /dev/urandom 
crw-rw-rw- 1 root root 1, 9 Jul 24 09:00 /dev/urandom
[root@foundation30 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1   ## generate a 128-byte random key file
[root@foundation30 ~]# cd /etc/cluster/
[root@foundation30 cluster]# ll
total 4
-rw-r--r-- 1 root root 128 Jul 24 10:28 fence_xvm.key
[root@foundation30 cluster]# systemctl restart fence_virtd
[root@foundation30 cluster]# systemctl status fence_virtd
[root@foundation30 cluster]# netstat -anulp |grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           8862/fence_virtd    
[root@foundation30 cluster]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

scp fence_xvm.key root@172.25.30.1:/etc/cluster/     ## distribute the key to both cluster nodes
scp fence_xvm.key root@172.25.31.4:/etc/cluster/
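The key must be byte-identical on the host and on every node; a quick way to confirm is to compare checksums:
md5sum /etc/cluster/fence_xvm.key    ## run on foundation30, server1 and server4; the sums must match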

2. Check the nodes
On server1 and server4:
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf cman-notify.d fence_xvm.key
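With the key in place, each node should be able to query fence_virtd on the host over multicast; a quick sanity check (the output lists your VMs and their UUIDs):
[root@server1 cluster]# fence_xvm -o list    ## should list the virtual machines known to the host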

3. Create the fence device

[root@server1 cluster]# cat cluster.conf    ## check whether the configuration has been written
<?xml version="1.0"?>
<cluster config_version="2" name="pucca">
    <clusternodes>
        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server4" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>

In the luci web UI:
Nodes -> server1 -> Add Fence Method (name: fence1) -> Add Fence Instance (device: vmfence, Domain: the UUID of the server1 VM). Repeat for server4.
Submit, then run cat cluster.conf again to check that the configuration was written; it should now contain per-node fence blocks roughly like the sketch below.
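A sketch of what the clusternode entries typically look like afterwards (the domain values are placeholders for the real VM UUIDs, and config_version will have been bumped):
    <clusternode name="server1" nodeid="1">
        <fence>
            <method name="fence1">
                <device domain="UUID-of-server1-vm" name="vmfence"/>
            </method>
        </fence>
    </clusternode>
    <clusternode name="server4" nodeid="2">
        <fence>
            <method name="fence1">
                <device domain="UUID-of-server4-vm" name="vmfence"/>
            </method>
        </fence>
    </clusternode>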

Test:
[root@server1 cluster]# fence_node server4     ## fence server4; it is powered off and rebooted
[root@server4 ~]# ip link set eth0 down        ## with eth0 down the node loses contact with the cluster and is power-cycled immediately

III Load Balancing
server1:

yum install -y httpd
/etc/init.d/httpd start
vim /var/www/html/index.html      ## write a test page
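The same httpd setup is repeated on server4. In luci, an IP address resource and an httpd script resource are added and grouped into a service named apache. The resulting resource-manager section of cluster.conf looks roughly like the sketch below; the failover-domain name, priorities and recovery policy are illustrative, and the VIP 172.25.31.100 is taken from the ip addr output that follows:
    <rm>
        <failoverdomains>
            <failoverdomain name="webfail" ordered="1" restricted="1">
                <failoverdomainnode name="server1" priority="1"/>
                <failoverdomainnode name="server4" priority="2"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="172.25.31.100/24" sleeptime="10"/>
            <script file="/etc/init.d/httpd" name="httpd"/>
        </resources>
        <service domain="webfail" name="apache" recovery="relocate">
            <ip ref="172.25.31.100/24"/>
            <script ref="httpd"/>
        </service>
    </rm>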
[root@server1 html]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:2f:b1:55 brd ff:ff:ff:ff:ff:ff
    inet 172.25.31.1/24 brd 172.25.31.255 scope global eth0
    inet 172.25.31.100/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe2f:b155/64 scope link 
       valid_lft forever preferred_lft forever
[root@server1 html]# clustat
Cluster Status for pucca @ Mon Jul 24 11:46:06 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local, rgmanager
 server4                                     2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server1                        started  
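At this point the service answers on the floating IP; a quick check from any client on the network (assuming the VIP 172.25.31.100 shown above):
curl http://172.25.31.100     ## returns server1's index.html while the service runs on server1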

Test:
[root@server1 html]# /etc/init.d/httpd stop    ## rgmanager detects the failure and relocates the service to server4
[root@server1 html]# clustat
[root@server4 html]# clustat
Cluster Status for pucca @ Mon Jul 24 11:54:42 2017
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, rgmanager
 server4                                     2 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:apache                 server4                        started
[root@server4 html]# echo c > /proc/sysrq-trigger    ## crash the kernel on server4; the node is fenced and rebooted, and the apache service relocates back to server1

IV iSCSI File System
On server2: attach an extra 8 GB virtual disk.
[root@server2 ~]# yum install -y scsi-*       ## pulls in scsi-target-utils, which provides tgtd
[root@server2 ~]# vim /etc/tgt/targets.conf
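For reference, a minimal targets.conf sketch for exporting the new disk (the device name /dev/vdb, the IQN and the initiator addresses are assumptions, not taken from the original):
<target iqn.2017-07.com.example:server2.target1>
    # the extra 8 GB disk attached to server2
    backing-store /dev/vdb
    # allow only the two cluster nodes to connect
    initiator-address 172.25.30.1
    initiator-address 172.25.30.4
</target>
tgtd then needs to be started (/etc/init.d/tgtd start; chkconfig tgtd on) before the cluster nodes can discover and log in to the target.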
