This lab project is a hands-on reproduction of the identically named post at http://haoxiaoyang.blog.51cto.com. Its only purpose is practice, and to check that the blog's steps are actually reproducible; nothing more. The original author's blog can be consulted directly.

Topology diagram

 

(Figure 001: cluster topology; server1 at 192.168.10.1, server2 at 192.168.10.2, shared cluster IP 192.168.10.100)

 

  

I. Modify the network parameters on both server nodes

 

  

server1:

 

[root@server1 ~]# vim /etc/sysconfig/network

 

[root@server1 ~]# cat  /etc/sysconfig/network

 

NETWORKING=yes

 

NETWORKING_IPV6=no

 

HOSTNAME=server1.sanzu.com

 

[root@server1 ~]# hostname

 

server1.sanzu.com

 

[root@server1 ~]# vim /etc/hosts

 

[root@server1 ~]# cat  /etc/hosts

 

# Do not remove the following line, or various programs

 

# that require network functionality will fail.

 

127.0.0.1        server1.sanzu.com server1 localhost.localdomain localhost
::1              localhost6.localdomain6 localhost6
192.168.10.1     server1.sanzu.com    server1
192.168.10.2     server2.sanzu.com    server2

 

      

server2:

  

[root@server2 ~]# vim /etc/sysconfig/network
[root@server2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=server2.sanzu.com
[root@server2 ~]# hostname
server2.sanzu.com
[root@server2 ~]# vim /etc/hosts
[root@server2 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        server2.sanzu.com    server2 localhost.localdomain localhost
::1              localhost6.localdomain6 localhost6
192.168.10.1     server1.sanzu.com    server1
192.168.10.2     server2.sanzu.com    server2
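With /etc/hosts populated on both machines, cross-node name resolution can be sanity-checked before going further (a quick optional test, not part of the original write-up):

[root@server1 ~]# ping -c 1 server2
[root@server2 ~]# ping -c 1 server1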

 

  

II. Synchronize the time across the cluster nodes

 

server1:

[root@server1 ~]# hwclock -s

 

server2:

[root@server2 ~]# hwclock -s
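hwclock -s sets the system clock from the hardware clock, so this only lines the nodes up if both hardware clocks already agree; a quick check that the times now match (an optional step, not in the original post):

[root@server1 ~]# date
[root@server2 ~]# date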

 

  

III. Exchange keys between the two nodes so they can communicate without passwords

 

  

server1:

[root@server1 ~]# ssh-keygen -t rsa     # generate the key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c6:f1:85:7a:7e:f2:3e:30:4e:3d:17:b7:81:d6:1d:84 root@server1.sanzu.com

 

    

[root@server1 ~]# ssh-copy-id -i .ssh/id_rsa.pub server2   # copy the generated public key to the server2 host

  

The authenticity of host 'server2 (192.168.10.2)' can't be established.
RSA key fingerprint is 91:71:d8:d9:f2:63:a6:78:2f:0c:1e:e8:32:aa:55:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server2,192.168.10.2' (RSA) to the list of known hosts.
root@server2's password:
Now try logging into the machine, with "ssh 'server2'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

    

  server2:

  

[root@server2 ~]# ssh-keygen -t rsa     # generate the key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
42:50:49:f5:42:43:15:32:bd:99:e5:d3:45:52:6f:97 root@server2.sanzu.com

 

# copy the generated public key to the server1 host

 

[root@server2 ~]# ssh-copy-id -i .ssh/id_rsa.pub server1
The authenticity of host 'server1 (192.168.10.1)' can't be established.
RSA key fingerprint is 91:71:d8:d9:f2:63:a6:78:2f:0c:1e:e8:32:aa:55:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server1,192.168.10.1' (RSA) to the list of known hosts.
root@server1's password:
Now try logging into the machine, with "ssh 'server1'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
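Passwordless login can now be verified in both directions (a quick check that is not in the original post):

[root@server1 ~]# ssh server2 'hostname'
server2.sanzu.com
[root@server2 ~]# ssh server1 'hostname'
server1.sanzu.com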

 

  

IV. Configure the yum client on both nodes so packages can be installed locally

 

1. In /etc/yum.repos.d/, copy an existing repo file to server.repo (cp r* server.repo), then edit it so it reads as follows:

 

[root@server1 ~]# cat /etc/yum.repos.d/server.repo
[rhel-server]
name=Red Hat Enterprise Linux server
baseurl=file:///mnt/cdrom/Server
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-vt]
name=Red Hat Enterprise vt
baseurl=file:///mnt/cdrom/VT
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-cluster]
name=Red Hat Enterprise Linux Cluster
baseurl=file:///mnt/cdrom/Cluster
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-clusterstorage]
name=Red Hat Enterprise Linux clusterstorage
baseurl=file:///mnt/cdrom/ClusterStorage
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

  

[root@server2 ~]# cat /etc/yum.repos.d/server.repo
(the content is identical to server1's server.repo shown above)
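All four baseurls point at the installation DVD, so it must be mounted at /mnt/cdrom on each node; the original post assumes this has already been done. A sketch of that step:

[root@server1 ~]# mkdir -p /mnt/cdrom
[root@server1 ~]# mount /dev/cdrom /mnt/cdrom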

 

VI. Install the packages with the following commands (the cluster RPMs, i.e. corosync, pacemaker and their dependencies, are assumed to already be in the current directory; the original post does not show how they were obtained):

 

[root@server2 ~]# yum localinstall *.rpm  -y  --nogpgcheck

 

[root@server1 ~]# yum localinstall *.rpm  -y  --nogpgcheck
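A quick way to confirm that the key packages landed (an optional check; the package names assume the standard corosync/pacemaker stack used below):

[root@server1 ~]# rpm -q corosync pacemaker
[root@server2 ~]# rpm -q corosync pacemaker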

 

VII. Apply the corresponding configuration on each node
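The steps below are run from /etc/corosync. The config file can be seeded from the example shipped with the corosync package (a sketch of a step the original post implies but does not show):

[root@server2 ~]# cd /etc/corosync
[root@server2 corosync]# cp corosync.conf.example corosync.conf
[root@server2 corosync]# vim corosync.conf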

 

1. The finished corosync.conf:

[root@server2 corosync]# cat corosync.conf

 

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.10.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

# the two stanzas below are added to the shipped example:
# run pacemaker as a corosync service, as root
service {
    ver: 0
    name: pacemaker
}

aisexec {
    user: root
    group: root
}

 

2. Create the cluster log directory

 

[root@server2 corosync]# mkdir  /var/log/cluster

 

3. Generate the authkey

 

[root@server2 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.

 

[root@server2 corosync]# ll
total 28
-rw-r--r-- 1 root root 5384 Jul 28  2010 amf.conf.example
-r-------- 1 root root  128 Feb  7 22:22 authkey
-rw-r--r-- 1 root root  545 Feb  7 22:18 corosync.conf
-rw-r--r-- 1 root root  436 Jul 28  2010 corosync.conf.example
drwxr-xr-x 2 root root 4096 Jul 28  2010 service.d
drwxr-xr-x 2 root root 4096 Jul 28  2010 uidgid.d

 

[root@server2 corosync]#

 

4. Copy authkey and corosync.conf from the server2 node into /etc/corosync/ on server1, and create the log directory there as well

 

[root@server2 corosync]# scp -p authkey corosync.conf server1:/etc/corosync/
authkey                                       100%  128     0.1KB/s   00:00
corosync.conf                                 100%  545     0.5KB/s   00:00
[root@server2 corosync]# ssh server1 'mkdir /var/log/cluster'

 

5. Start the corosync service on the server2 node

 

[root@server2 corosync]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
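corosync on server1 can be started the same way, for example over the passwordless SSH link set up earlier (a sketch; logging in to server1 and running the service command directly works just as well):

[root@server2 corosync]# ssh server1 'service corosync start'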

 

6. Check that the corosync engine started correctly

 

[root@server2 corosync]# grep -i -e "corosync  cluster engine" -e "configuration file" /var/log/messages
Feb  7 19:02:31 zzu smartd[2829]: Opened configuration file /etc/smartd.conf
Feb  7 19:02:31 zzu smartd[2829]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Feb  7 19:44:53 zzu smartd[2816]: Opened configuration file /etc/smartd.conf
Feb  7 19:44:53 zzu smartd[2816]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Feb  7 20:06:16 server2 smartd[2779]: Opened configuration file /etc/smartd.conf
Feb  7 20:06:16 server2 smartd[2779]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Feb  7 22:30:33 server2 corosync[334]:   [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

 

7. Check whether the initial membership notifications went out

 

[root@server2 corosync]# grep -i totem /var/log/messages
Feb  7 22:30:33 server2 corosync[334]:   [TOTEM ] Initializing transport (UDP/IP).
Feb  7 22:30:33 server2 corosync[334]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb  7 22:30:34 server2 corosync[334]:   [TOTEM ] The network interface [192.168.10.2] is now up.
Feb  7 22:30:35 server2 corosync[334]:   [TOTEM ] Process pause detected for 791 ms, flushing membership messages.
Feb  7 22:30:35 server2 corosync[334]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.

 

8. Check whether any errors were produced during startup

 

[root@server2 corosync]# grep  -i error: /var/log/messages  |grep -v unpack_resources

 

9. Check the messages from pacemaker's startup

 

[root@server2 corosync]# grep -i pcmk_startup /var/log/messages
Feb  7 22:30:34 server2 corosync[334]:   [pcmk  ] info: pcmk_startup: CRM: Initialized
Feb  7 22:30:34 server2 corosync[334]:   [pcmk  ] Logging: Initialized pcmk_startup
Feb  7 22:30:34 server2 corosync[334]:   [pcmk  ] info: pcmk_startup: Maximum core file size is: 4294967295
Feb  7 22:30:34 server2 corosync[334]:   [pcmk  ] info: pcmk_startup: Service: 9
Feb  7 22:30:34 server2 corosync[334]:   [pcmk  ] info: pcmk_startup: Local hostname: server2.sanzu.com

 

The same checks are then run on the other node, server1; the steps are identical.

 

VIII. Check the cluster status from the server2 node

 

1. Overall cluster status

 

[root@server2 corosync]# crm status
============
Last updated: Tue Feb  7 22:49:11 2012
Stack: openais
Current DC: server2.sanzu.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ server2.sanzu.com server1.sanzu.com ]

 

2. STONITH can be disabled, since this lab has no fencing device:

 

crm(live)configure# property  stonith-enabled=false

 

crm(live)configure# commit

 

3. Check the configuration for errors with the following command; with STONITH disabled it should return silently:

 

[root@server2 ~]# crm_verify -L

 

4. View the cluster's resource agent classes

 

crm(live)ra# classes
heartbeat
lsb
ocf / heartbeat pacemaker
stonith
crm(live)ra#
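The agents within a class can be listed as well, e.g. the ocf heartbeat agents that the IPaddr resource used below comes from (an optional look, not in the original post):

crm(live)ra# list ocf heartbeat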

 

IX. Configure the cluster resources

 

1. The IP address resource

 

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.10.100

crm(live)configure# primitive webIP ocf:heartbeat:IPaddr params ip=192.168.10.100

(Note: the address ends up defined twice here, once as webip and once as webIP. Resource IDs are case-sensitive, so these are two separate resources claiming the same address, which is what produces the failed webip_start_0 action visible in the status output further down.)
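If the duplicate is unwanted, it can be dropped again from the configure shell (a sketch; note that the transcripts below were captured with both definitions still in place):

crm(live)configure# delete webIP
crm(live)configure# commit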

 

2. View the configuration

 

crm(live)configure# show

 

node server1.sanzu.com
node server2.sanzu.com
primitive webIP ocf:heartbeat:IPaddr \
    params ip="192.168.10.100"
primitive webip ocf:heartbeat:IPaddr \
    params ip="192.168.10.100"
property $id="cib-bootstrap-options" \
    dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false"

 

3. Commit the changes

 

crm(live)configure# commit

 

4. Check the status

 

crm(live)# status
============
Last updated: Tue Feb  7 23:53:29 2012
Stack: openais
Current DC: server2.sanzu.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ server2.sanzu.com server1.sanzu.com ]
webip    (ocf::heartbeat:IPaddr):    Started server1.sanzu.com

 

5. Use ifconfig on the server2 node to inspect the interfaces

 

[root@server2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:FC:97:15
          inet addr:192.168.10.2  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fefc:9715/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1028487 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1086537 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:151067750 (144.0 MiB)  TX bytes:203621600 (194.1 MiB)
          Interrupt:67 Base address:0x2000

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:FC:97:15
          inet addr:192.168.10.100  Bcast:192.168.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:67 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:243085 errors:0 dropped:0 overruns:0 frame:0
          TX packets:243085 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:27682005 (26.3 MiB)  TX bytes:27682005 (26.3 MiB)

 

6. Define the web service resource. httpd must be installed on both nodes; once it is installed (and disabled at boot, so the cluster controls it), the httpd LSB script can be listed:

 

[root@server1 ~]# yum install -y httpd
[root@server1 ~]# chkconfig httpd off
[root@server2 ~]# yum install -y httpd
[root@server2 ~]# chkconfig httpd off
[root@server2 ~]# crm ra list lsb

 

View the httpd agent's metadata:

 

crm(live)ra# meta lsb:httpd
lsb:httpd

Apache is a World Wide Web server.  It is used to serve \
            HTML files and CGI.

Operations' defaults (advisory minimum):

    start         timeout=15
    stop          timeout=15
    status        timeout=15
    restart       timeout=15
    force-reload  timeout=15
    monitor       interval=15 timeout=15 start-delay=15

 

7. Define the httpd resource

 

crm(live)configure# primitive webserver lsb:httpd
crm(live)configure# show
node server1.sanzu.com
node server2.sanzu.com
primitive webIP ocf:heartbeat:IPaddr \
    params ip="192.168.10.100"
primitive webip ocf:heartbeat:IPaddr \
    params ip="192.168.10.100"
primitive webserver lsb:httpd
property $id="cib-bootstrap-options" \
    dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false"

 

Use the cd command to switch back to the top level of the crm shell, then check the status:

 

crm(live)# status
============
Last updated: Wed Feb  8 00:14:17 2012
Stack: openais
Current DC: server2.sanzu.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
3 Resources configured.
============
Online: [ server2.sanzu.com server1.sanzu.com ]
webIP    (ocf::heartbeat:IPaddr) Started [ server2.sanzu.com server1.sanzu.com ]
webserver    (lsb:httpd):    Started server2.sanzu.com

Failed actions:
    webip_start_0 (node=server2.sanzu.com, call=15958, rc=1, status=complete): unknown error
crm(live)#
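The failed webip_start_0 record is a leftover of the duplicate IP definitions (webIP already holds 192.168.10.100, so webip cannot start there). The stale record can be cleared with a standard resource cleanup (a sketch):

[root@server2 ~]# crm resource cleanup webip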

 

8. Define the web group

 

crm(live)configure# group web webip webserver
crm(live)configure# show
node server1.sanzu.com
node server2.sanzu.com
primitive webIP ocf:heartbeat:IPaddr \
    params ip="192.168.10.100"
primitive webip ocf:heartbeat:IPaddr \
    params ip="192.168.10.100"
primitive webserver lsb:httpd
group web webip webserver
property $id="cib-bootstrap-options" \
    dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="false"

 

crm(live)configure#

 

crm(live)# status

 

============
Last updated: Wed Feb  8 00:18:50 2012
Stack: openais
Current DC: server2.sanzu.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ server2.sanzu.com server1.sanzu.com ]
webIP    (ocf::heartbeat:IPaddr) Started [ server2.sanzu.com server1.sanzu.com ]
 Resource Group: web
     webip    (ocf::heartbeat:IPaddr):    Stopped
     webserver    (lsb:httpd):    Started server2.sanzu.com

Failed actions:
    webip_start_0 (node=server2.sanzu.com, call=15958, rc=1, status=complete): unknown error
crm(live)#

 

X. Test the cluster configuration

 

1. On server1, create a test page containing "server1.sanzu.com"; on server2, create one containing "server2.sanzu.com".
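A minimal way to create those pages, assuming httpd's default DocumentRoot of /var/www/html:

[root@server1 ~]# echo "server1.sanzu.com" > /var/www/html/index.html
[root@server2 ~]# echo "server2.sanzu.com" > /var/www/html/index.html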

 

2. Browse to http://192.168.10.100

 

(Figure 002: the page returned at http://192.168.10.100)

 

3. Stop the corosync service on node server2

 

[root@server2 ~]# service  corosync  stop

 

At this point node 1 (server1) does not take over: with one of the two nodes gone, the surviving partition no longer holds quorum, and under the default no-quorum-policy the remaining node stops all resources instead of running them.

 

4. Tell the cluster to ignore loss of quorum

 

crm(live)configure# property no-quorum-policy=ignore

 

crm(live)configure# commit

 

5. Browse to http://192.168.10.100 again

 

(Figure 003: after the failover, the page is served by the surviving node)

 

The service successfully fails over to the surviving node, server1.
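The move can be confirmed on server1; the cluster IP should now be bound there (an optional check):

[root@server1 ~]# ifconfig | grep 192.168.10.100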

 

That covers the part I verified myself. Right, time for a bathroom break. Heh.

 

Yep, Han Yu is right!