drbd+corosync+pacemaker

Building a highly available MySQL cluster

Lab environment:

node1 node1.a.com 192.168.10.10

node2 node2.a.com 192.168.10.20

VIP: 192.168.50.100

Lab procedure:

Configure the network parameters on each cluster node

node1

[root@node1 ~]# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node1.a.com

[root@node1 ~]# vim /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.10.10 node1.a.com node1

192.168.10.20 node2.a.com node2

[root@node1 ~]# hostname

node1.a.com

Synchronize the time across the cluster nodes (hwclock -s sets the system clock from the hardware clock):

[root@node1 ~]# hwclock -s
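hwclock -s is a one-shot sync that copies the hardware clock into the system clock; it does not keep the nodes aligned over time. A quick sanity check is to compare Unix timestamps from both nodes. In the sketch below both readings are taken locally for illustration; on the cluster the second reading would come from the peer, e.g. via ssh node2 'date +%s':

```shell
# measure clock skew between two timestamp readings; in the cluster the
# second reading would come from the peer: t2=$(ssh node2 'date +%s')
t1=$(date +%s)
t2=$(date +%s)            # stand-in for the peer's clock
skew=$(( t1 > t2 ? t1 - t2 : t2 - t1 ))
echo "clock skew: ${skew}s"
```

DRBD and Corosync both tolerate small offsets, but log correlation across nodes becomes painful once the skew grows, so keeping it near zero is worthwhile.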

node2

[root@node2 ~]# vim /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node2.a.com

[root@node2 ~]# vim /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

192.168.10.10 node1.a.com node1

192.168.10.20 node2.a.com node2

[root@node2 ~]# hostname

node2.a.com

[root@node2 ~]# hwclock -s

Generate keys on each node for password-less SSH communication

node1

Generate an RSA key pair:

[root@node1 ~]# ssh-keygen -t rsa

Copy the public key to node2:

[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2

The authenticity of host 'node2 (192.168.10.20)' can't be established.

RSA key fingerprint is f8:b2:a1:a2:27:51:b2:86:c9:d6:c9:74:a4:e3:e5:93.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,192.168.10.20' (RSA) to the list of known hosts.

root@node2's password: 123456

Now try logging into the machine, with "ssh 'root@node2'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

node2

[root@node2 ~]# ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

ea:25:f1:47:37:28:26:6a:75:57:1c:46:81:a8:41:6b root@node2.a.com

[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub node1

The authenticity of host 'node1 (192.168.10.10)' can't be established.

RSA key fingerprint is f8:b2:a1:a2:27:51:b2:86:c9:d6:c9:74:a4:e3:e5:93.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node1,192.168.10.10' (RSA) to the list of known hosts.

root@node1's password: 123456

Now try logging into the machine, with "ssh 'node1'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
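ssh-copy-id's effect is to append the public key to the remote ~/.ssh/authorized_keys. That core step can be simulated locally with a deduplicated append; the key string and file name below are made up for illustration:

```shell
# simulate ssh-copy-id's core step: append a public key to an
# authorized_keys file only if it is not already present
# (the key line and the file path are illustrative, not real)
pub='ssh-rsa AAAAB3NzaExampleOnly root@node1.a.com'
auth=./authorized_keys.test
touch "$auth"
grep -qxF "$pub" "$auth" || echo "$pub" >> "$auth"
grep -qxF "$pub" "$auth" || echo "$pub" >> "$auth"   # second run adds nothing
grep -c 'ssh-rsa' "$auth"                            # prints 1
```

The grep -qxF guard is what keeps repeated runs from stacking up duplicate entries in authorized_keys.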

Configure the yum client on each node

node1

[root@node1 ~]# vim /etc/yum.repos.d/rhel-debuginfo.repo

[rhel-server]

name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-cluster]

name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[rhel-clusterstorage]

name=Red Hat Enterprise Linux clusterstorage

baseurl=file:///mnt/cdrom/ClusterStorage

enabled=1

gpgcheck=1

gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

[root@node1 ~]# scp /etc/yum.repos.d/rhel-debuginfo.repo node2:/etc/yum.repos.d/

rhel-debuginfo.repo 100% 319 0.3KB/s 00:00
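The baseurl entries above assume the RHEL 5 installation DVD is mounted at /mnt/cdrom. If it is not already mounted, something like the following is needed on each node (the device name may differ on your system):

```shell
# mount the installation media where the repo files expect it
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom    # or: mount -o loop rhel5.iso /mnt/cdrom
```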

Upload the downloaded RPM packages to each node

[root@node1 ~]# ll

total 162332

drwxr-xr-x 2 root root 4096 Mar 19 23:34 Desktop

-rw------- 1 root root 1330 Feb 9 00:27 anaconda-ks.cfg

-rw-r--r-- 1 root root 271360 May 13 16:41 cluster-glue-1.0.6-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 133254 May 13 16:41 cluster-glue-libs-1.0.6-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 170052 May 13 16:41 corosync-1.2.7-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 158502 May 13 16:41 corosynclib-1.2.7-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 221868 May 13 16:41 drbd83-8.3.8-1.el5.centos.i386.rpm

-rw-r--r-- 1 root root 165591 May 13 16:41 heartbeat-3.0.3-2.3.el5.i386.rpm

-rw-r--r-- 1 root root 289600 May 13 16:41 heartbeat-libs-3.0.3-2.3.el5.i386.rpm

-rw-r--r-- 1 root root 35236 Feb 9 00:27 install.log

-rw-r--r-- 1 root root 3995 Feb 9 00:26 install.log.syslog

-rw-r--r-- 1 root root 125974 May 13 16:41 kmod-drbd83-8.3.8-1.el5.centos.i686.rpm

-rw-r--r-- 1 root root 60458 May 13 16:41 libesmtp-1.0.4-5.el5.i386.rpm

-rw-r--r-- 1 root root 162247449 May 13 16:41 mysql-5.5.15-linux2.6-i686.tar.gz

-rw-r--r-- 1 root root 207085 May 13 16:41 openais-1.1.3-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 94614 May 13 16:41 openaislib-1.1.3-1.6.el5.i386.rpm

-rw-r--r-- 1 root root 796813 May 13 16:41 pacemaker-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 207925 May 13 16:41 pacemaker-cts-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 332026 May 13 16:41 pacemaker-libs-1.1.5-1.1.el5.i386.rpm

-rw-r--r-- 1 root root 32818 May 13 16:41 perl-TimeDate-1.16-5.el5.noarch.rpm

-rw-r--r-- 1 root root 388632 May 13 16:41 resource-agents-1.0.4-1.1.el5.i386.rpm

[root@node1 ~]# yum localinstall *.rpm -y --nogpgcheck

[root@node1 ~]# scp *.rpm node2:/root

[root@node2 ~]# yum localinstall *.rpm -y --nogpgcheck

Add a DRBD backing device of identical size and type (sdb1) on each node

node1

A small extra disk must be added first:

[root@node1 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

The number of cylinders for this disk is set to 1044.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

(e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Command action

e extended

p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1044, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044): +1G

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[root@node1 ~]# partprobe /dev/sdb

[root@node1 ~]# cat /proc/partitions

major minor #blocks name

8 0 20971520 sda

8 1 104391 sda1

8 2 2096482 sda2

8 3 10482412 sda3

8 16 8388608 sdb

8 17 987966 sdb1

Repeat the same partitioning on node2.
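Retyping the interactive fdisk dialogue on node2 is error-prone. One possible shortcut is to copy node1's partition table over SSH with sfdisk (a sketch; it assumes both disks report the same geometry):

```shell
# replicate node1's partition layout onto node2's /dev/sdb
sfdisk -d /dev/sdb | ssh node2 'sfdisk /dev/sdb'
ssh node2 'partprobe /dev/sdb && cat /proc/partitions'
```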

Configuring DRBD

node1

Copy the sample configuration file into place:

[root@node1 ~]# cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/

[root@node1 ~]# cd /etc/drbd.d/

Back up global_common.conf:

[root@node1 drbd.d]# cp global_common.conf global_common.conf.bak

[root@node1 drbd.d]# vim global_common.conf

global {

usage-count no;

}

common {

protocol C;

handlers {

}

startup {

wfc-timeout 120;

degr-wfc-timeout 120;

}

disk {

on-io-error detach;

}

net {

cram-hmac-alg "sha1";

shared-secret "mydrbdlab";

}

syncer {

rate 100M;

}

}

Define the mysql resource:

[root@node1 drbd.d]# vim mysql.res

resource mysql {

on node1.a.com {

device /dev/drbd0;

disk /dev/sdb1;

address 192.168.10.10:7789;

meta-disk internal;

}

on node2.a.com {

device /dev/drbd0;

disk /dev/sdb1;

address 192.168.10.20:7789;

meta-disk internal;

}

}

Copy the drbd.* files above to node2:

[root@node1 drbd.d]# scp -r /etc/drbd.* node2:/etc/

drbd.conf 100% 133 0.1KB/s 00:00

global_common.conf.bak 100% 1418 1.4KB/s 00:00

global_common.conf 100% 1548 1.5KB/s 00:00

mysql.res 100% 241 0.2KB/s 00:00
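Before initializing the metadata, it can be worth confirming that both nodes parse the resource definition identically; drbdadm can print the effective configuration it sees:

```shell
# show the configuration drbdadm actually resolves for the mysql resource
drbdadm dump mysql
ssh node2 'drbdadm dump mysql'   # should print the same definition
```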

On node1, initialize the newly defined mysql resource and start the service:

[root@node1 drbd.d]# drbdadm create-md mysql

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

[root@node1 drbd.d]# service drbd start

Starting DRBD resources: [ n(mysql) ]..........

***************************************************************

DRBD's startup script waits for the peer node(s) to appear.

- In case this node was already a degraded cluster before the

reboot the timeout is 120 seconds. [degr-wfc-timeout]

- If the peer was available before the reboot the timeout will

expire after 120 seconds. [wfc-timeout]

(These values are for resource 'mysql'; 0 sec -> wait forever)

To abort waiting enter 'yes' [ 25]:

Check the startup state with drbd-overview:

[root@node1 drbd.d]# drbd-overview

0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r----

node2

[root@node2 ~]# drbdadm create-md mysql

Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.

[root@node2 ~]# service drbd start

[root@node2 drbd.d]# drbd-overview

0:mysql Connected Secondary/Secondary Inconsistent/Inconsistent C r----

The output shows both nodes in the Secondary state, so the next step is to promote one of them to Primary; here node1 becomes the primary node:

[root@node1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary mysql

[root@node1 drbd.d]# drbd-overview

0:mysql SyncSource Primary/Secondary UpToDate/Inconsistent C r----

[=======>............] sync'ed: 40.5% (592824/987896)K delay_probe: 24

[root@node2 ~]# drbd-overview

0:mysql Connected Secondary/Primary UpToDate/UpToDate C r----

[root@node2 ~]# cat /proc/drbd

version: 8.3.8 (api:88/proto:86-94)

GIT-hash: d78846e52224fd00562f7c225bcc25b2d422321d build by mockbuild@builder10.centos.org, 2010-06-04 08:04:16

0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----

ns:0 nr:987896 dw:987896 dr:0 al:0 bm:61 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Watch the synchronization progress:

[root@node1 drbd.d]# watch -n 1 'cat /proc/drbd' (press Ctrl+C to exit)

Create the filesystem (possible only on the Primary node):

[root@node1 drbd.d]# mkfs -t ext3 /dev/drbd0

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

123648 inodes, 246974 blocks

12348 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=255852544

8 block groups

32768 blocks per group, 32768 fragments per group

15456 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@node1 drbd.d]# mkdir /mysqldata

[root@node1 drbd.d]# mount /dev/drbd0 /mysqldata/

[root@node1 drbd.d]# cd /mysqldata/

[root@node1 mysqldata]# touch f1 f2

[root@node1 mysqldata]# ll /mysqldata/

total 16

-rw-r--r-- 1 root root 0 May 13 18:24 f1

-rw-r--r-- 1 root root 0 May 13 18:24 f2

drwx------ 2 root root 16384 May 13 18:23 lost+found

[root@node1 mysqldata]# cd

[root@node1 ~]# umount /mysqldata/

Demote node1 to Secondary:

[root@node1 ~]# drbdadm secondary mysql

[root@node1 ~]# drbd-overview

0:mysql Connected Secondary/Secondary UpToDate/UpToDate C r----

Promote node2 to Primary:

[root@node2 ~]# drbdadm primary mysql

[root@node2 ~]# drbd-overview

0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----

[root@node2 ~]# mkdir /mysqldata

[root@node2 ~]# mount /dev/drbd0 /mysqldata/

[root@node2 ~]# ll /mysqldata/

total 16

-rw-r--r-- 1 root root 0 May 13 18:24 f1

-rw-r--r-- 1 root root 0 May 13 18:24 f2

drwx------ 2 root root 16384 May 13 18:23 lost+found

[root@node2 ~]# umount /mysqldata/
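The demote/promote dance just performed follows a fixed order, because DRBD in single-primary mode refuses a second Primary: unmount and demote the current Primary first, then promote and mount on the peer. Condensed as a sketch (run from the current Primary):

```shell
# manual role swap with single-primary DRBD: demote before promoting the peer
umount /mysqldata                        # free the device on the Primary
drbdadm secondary mysql                  # this node steps down
ssh node2 'drbdadm primary mysql'        # the peer takes over
ssh node2 'mount /dev/drbd0 /mysqldata'  # data written earlier is visible there
```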

Installing and configuring MySQL

Add the mysql user and group:

[root@node1 ~]# groupadd -r mysql

You have new mail in /var/spool/mail/root

[root@node1 ~]# useradd -g mysql -r mysql

Since only the Primary can read, write, and mount the device, set node1 back to Primary and node2 to Secondary:

[root@node2 ~]# drbdadm secondary mysql

You have new mail in /var/spool/mail/root

node1

[root@node1 ~]# drbdadm primary mysql

[root@node1 ~]# drbd-overview

0:mysql Connected Primary/Secondary UpToDate/UpToDate C r----

Mount the DRBD device:

[root@node1 ~]# mount /dev/drbd0 /mysqldata/

[root@node1 ~]# mkdir /mysqldata/data

The data directory will hold MySQL's data, so change its owner and group:

[root@node1 ~]# chown -R mysql.mysql /mysqldata/data/

Verify:

[root@node1 ~]# ll /mysqldata/

total 20

drwxr-xr-x 2 mysql mysql 4096 May 13 18:52 data

-rw-r--r-- 1 root root 0 May 13 18:35 f1

-rw-r--r-- 1 root root 0 May 13 18:35 f2

drwx------ 2 root root 16384 May 13 18:34 lost+found

Install MySQL

[root@node1 ~]# tar zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/

[root@node1 ~]# cd /usr/local/

[root@node1 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql

create symbolic link `mysql' to `mysql-5.5.15-linux2.6-i686/'

[root@node1 local]# cd mysql

[root@node1 mysql]# chown -R mysql.mysql .

Initialize the MySQL database:

[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mysqldata/data

[root@node1 mysql]# chown -R root .

Provide MySQL's main configuration file:

[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf

Then set thread_concurrency in this file to twice your CPU count, for example:

[root@node1 mysql]# vim /etc/my.cnf

thread_concurrency = 2

Also add the following line to point MySQL at its data directory:

datadir = /mysqldata/data

Provide a SysV init script for MySQL so the service command can manage it:

[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld

node2 needs the same configuration file and init script, so copy them over:

[root@node1 mysql]# scp /etc/my.cnf node2:/etc/

my.cnf 100% 4691 4.6KB/s 00:00

[root@node1 mysql]# scp /etc/rc.d/init.d/mysqld node2:/etc/rc.d/init.d/

mysqld 100% 10KB 10.4KB/s 00:00

Register the service:

[root@node1 mysql]# chkconfig --add mysqld

Make sure it does not start at boot; the CRM will control it:

[root@node1 mysql]# chkconfig mysqld off

Now start the service to test it:

[root@node1 mysql]# service mysqld start

Starting MySQL...... [ OK ]

Stop the service after testing:

[root@node1 mysql]# service mysqld stop

Shutting down MySQL.. [ OK ]

[root@node1 mysql]# ls /mysqldata/data/

ib_logfile0 ib_logfile1 ibdata1 mysql mysql-bin.000001 mysql-bin.index node1.a.com.err performance_schema test

To make the MySQL installation conform to system conventions and export its development components to the system, a few more steps are needed:

Export MySQL's man pages to man's search path:

[root@node1 mysql]# vim /etc/man.config

Add the following line:

MANPATH /usr/local/mysql/man

Export MySQL's header files to the system include path /usr/include; a simple symlink does it:

[root@node1 mysql]# ln -sv /usr/local/mysql/include /usr/include/mysql

create symbolic link `/usr/include/mysql' to `/usr/local/mysql/include'

Export MySQL's libraries to the system library search path (any file under /etc/ld.so.conf.d/ with a .conf suffix will do):

[root@node1 mysql]# echo '/usr/local/mysql/lib/'> /etc/ld.so.conf.d/mysql.conf

Reload the system's library cache:

[root@node1 mysql]# ldconfig

Extend the PATH environment variable so that every user on the system can run the MySQL commands directly:

[root@node1 mysql]# vim /etc/profile

Append the following line:

PATH=$PATH:/usr/local/mysql/bin

Re-read the environment:

[root@node1 mysql]# . /etc/profile

[root@node1 mysql]# echo $PATH

/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin
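Note that the /etc/profile line above appends to PATH unconditionally, so sourcing the file repeatedly grows PATH with duplicate entries. A guard that keeps the append idempotent, in plain POSIX shell, might look like this (an illustration, not part of the original setup):

```shell
# append /usr/local/mysql/bin to PATH only if it is not already there
dir=/usr/local/mysql/bin
add_path() {
  case ":$PATH:" in
    *:"$dir":*) ;;              # already present: nothing to do
    *) PATH="$PATH:$dir" ;;
  esac
}
add_path
add_path                        # calling it again changes nothing
echo "$PATH" | tr ':' '\n' | grep -cx "$dir"   # prints 1
```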

[root@node1 mysql]# umount /mysqldata/

node2

[root@node2 ~]# groupadd -r mysql

[root@node2 ~]# useradd -g mysql -r mysql

Since only the Primary can read, write, and mount the device, now make node2 the Primary and node1 the Secondary:

[root@node1 mysql]# drbdadm secondary mysql

[root@node2 ~]# drbdadm primary mysql

[root@node2 ~]# mount /dev/drbd0 /mysqldata/

[root@node2 ~]# ls /mysqldata/

data f1 f2 lost+found

[root@node2 ~]# tar -zxvf mysql-5.5.15-linux2.6-i686.tar.gz -C /usr/local/

[root@node2 ~]# cd /usr/local/

[root@node2 local]# ln -sv mysql-5.5.15-linux2.6-i686/ mysql

create symbolic link `mysql' to `mysql-5.5.15-linux2.6-i686/'

[root@node2 local]# cd mysql

[root@node2 mysql]# chown -R root:mysql .

[root@node2 mysql]# cp support-files/my-large.cnf /etc/my.cnf

cp: overwrite `/etc/my.cnf'? n

[root@node2 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld

cp: overwrite `/etc/rc.d/init.d/mysqld'? n

[root@node2 mysql]# chkconfig --add mysqld

[root@node2 mysql]# chkconfig mysqld off

[root@node2 mysql]# service mysqld start

Starting MySQL...... [ OK ]

[root@node2 mysql]# ls /mysqldata/data/

ib_logfile0 ibdata1 mysql-bin.000001 mysql-bin.index node2.a.com.err performance_schema

ib_logfile1 mysql mysql-bin.000002 node1.a.com.err node2.a.com.pid test

[root@node2 mysql]# service mysqld stop

Shutting down MySQL.. [ OK ]

To make the MySQL installation conform to system conventions and export its development components, repeat the same man-page, header, library, and PATH steps performed on node1; the method is identical, so it is not shown again.

Unmount the device:

[root@node2 mysql]# umount /dev/drbd0

Installing and configuring Corosync and Pacemaker

node1

[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# cp corosync.conf.example corosync.conf

[root@node1 corosync]# vim corosync.conf

compatibility: whitetank

totem { // protocol settings for the heartbeat/membership layer

version: 2

secauth: off

threads: 0

interface {

ringnumber: 0

bindnetaddr: 192.168.10.0 // the only value we need to change: the network address of the cluster NICs

mcastaddr: 226.94.1.1

mcastport: 5405

}

}

logging {

fileline: off

to_stderr: no // whether to send messages to standard error

to_logfile: yes // log to a file

to_syslog: yes // log to syslog (consider disabling one of the two; double logging costs performance)

logfile: /var/log/cluster/corosync.log // the cluster directory must be created by hand

debug: off // enable when troubleshooting

timestamp: on // timestamp each log entry

// The following is OpenAIS-specific and can be left as-is

logger_subsys {

subsys: AMF

debug: off

}

}

amf {

mode: disabled

}

// Additions: the settings above only cover the messaging layer; pacemaker must be declared as a service

service {

ver: 0

name: pacemaker

use_mgmtd: yes

}

// OpenAIS itself is unused, but some of its sub-options still apply

aisexec {

user: root

group: root

}

Create the cluster log directory:

[root@node1 corosync]# mkdir /var/log/cluster

To authenticate other hosts joining the cluster, generate an authkey:

[root@node1 corosync]# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Press keys on your keyboard to generate entropy (bits = 888).

Press keys on your keyboard to generate entropy (bits = 952).

Press keys on your keyboard to generate entropy (bits = 1016).

Writing corosync key to /etc/corosync/authkey.

[root@node1 corosync]# ll

total 28

-rw-r--r-- 1 root root 5384 Jul 28 2010 amf.conf.example

-r-------- 1 root root 128 May 13 20:50 authkey

-rw-r--r-- 1 root root 556 May 13 20:49 corosync.conf

-rw-r--r-- 1 root root 436 Jul 28 2010 corosync.conf.example

drwxr-xr-x 2 root root 4096 Jul 28 2010 service.d

drwxr-xr-x 2 root root 4096 Jul 28 2010 uidgid.d

Copy the files from node1 to node2 (remember -p to preserve permissions):

[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/

authkey 100% 128 0.1KB/s 00:00

corosync.conf 100% 556 0.5KB/s 00:00

[root@node1 corosync]# ssh node2 'mkdir /var/log/cluster'

Start the corosync service on node1 and node2:

[root@node1 corosync]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

[root@node2 ~]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

Verify that the Corosync engine started correctly:

[root@node1 corosync]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Feb 8 16:33:15 localhost smartd[2928]: Opened configuration file /etc/smartd.conf

Feb 8 16:33:15 localhost smartd[2928]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 16:45:09 localhost smartd[3395]: Opened configuration file /etc/smartd.conf

Feb 8 16:45:09 localhost smartd[3395]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 17:13:11 localhost smartd[3386]: Opened configuration file /etc/smartd.conf

Feb 8 17:13:11 localhost smartd[3386]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Mar 19 23:33:34 localhost smartd[3385]: Opened configuration file /etc/smartd.conf

Mar 19 23:33:34 localhost smartd[3385]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 6 09:12:46 localhost smartd[3377]: Opened configuration file /etc/smartd.conf

May 6 09:12:46 localhost smartd[3377]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 6 15:37:51 localhost smartd[3408]: Opened configuration file /etc/smartd.conf

May 6 15:37:51 localhost smartd[3408]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 09:31:06 localhost smartd[3416]: Opened configuration file /etc/smartd.conf

May 8 09:31:06 localhost smartd[3416]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 09:47:42 node1 smartd[3353]: Opened configuration file /etc/smartd.conf

May 8 09:47:42 node1 smartd[3353]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 12:10:18 node1 corosync[7838]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

May 8 12:10:18 node1 corosync[7838]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check that the initial membership notifications were sent:

[root@node1 corosync]# grep -i totem /var/log/messages

May 8 12:10:18 node1 corosync[7838]: [TOTEM ] Initializing transport (UDP/IP).

May 8 12:10:18 node1 corosync[7838]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

May 8 12:10:19 node1 corosync[7838]: [TOTEM ] The network interface [192.168.10.10] is now up.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] Process pause detected for 899 ms, flushing membership messages.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] A processor failed, forming new configuration.

May 8 12:10:24 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:10:32 node1 corosync[7838]: [TOTEM ] A processor failed, forming new configuration.

May 8 12:10:32 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:12:03 node1 corosync[7838]: [TOTEM ] A processor failed, forming new configuration.

May 8 12:12:03 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

May 8 12:12:15 node1 corosync[7838]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check whether any errors occurred during startup:

[root@node1 corosync]# grep -i error: /var/log/messages |grep -v unpack_resources

Feb 8 17:03:46 localhost kernel: sd 1:0:0:0: SCSI error: return code = 0x00010000

Check whether Pacemaker has started:

[root@node1 corosync]# grep -i pcmk_startup /var/log/messages

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: CRM: Initialized

May 8 12:10:23 node1 corosync[7838]: [pcmk ] Logging: Initialized pcmk_startup

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: Service: 9

May 8 12:10:23 node1 corosync[7838]: [pcmk ] info: pcmk_startup: Local hostname: node1.a.com

node2

Check that the initial membership notifications were sent:

[root@node2 ~]# grep -i totem /var/log/messages

May 8 12:12:16 node2 corosync[7817]: [TOTEM ] Initializing transport (UDP/IP).

May 8 12:12:16 node2 corosync[7817]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

May 8 12:12:16 node2 corosync[7817]: [TOTEM ] The network interface [192.168.10.20] is now up.

May 8 12:12:18 node2 corosync[7817]: [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check corosync's status:

[root@node2 ~]# service corosync status

corosync (pid 7817) is running...

Verify that the Corosync engine started correctly:

[root@node2 ~]# grep -i -e "corosync cluster engine" -e "configuration file" /var/log/messages

Feb 8 16:33:15 localhost smartd[2928]: Opened configuration file /etc/smartd.conf

Feb 8 16:33:15 localhost smartd[2928]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 16:45:09 localhost smartd[3395]: Opened configuration file /etc/smartd.conf

Feb 8 16:45:09 localhost smartd[3395]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Feb 8 17:13:11 localhost smartd[3386]: Opened configuration file /etc/smartd.conf

Feb 8 17:13:11 localhost smartd[3386]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

Mar 19 23:33:34 localhost smartd[3385]: Opened configuration file /etc/smartd.conf

Mar 19 23:33:34 localhost smartd[3385]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 6 09:18:50 localhost smartd[3379]: Opened configuration file /etc/smartd.conf

May 6 09:18:50 localhost smartd[3379]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 09:50:25 node2 smartd[3357]: Opened configuration file /etc/smartd.conf

May 8 09:50:26 node2 smartd[3357]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices

May 8 12:12:16 node2 corosync[7817]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.

May 8 12:12:16 node2 corosync[7817]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check whether any errors occurred during startup:

[root@node2 ~]# grep -i error: /var/log/messages |grep -v unpack_resources

Feb 8 17:03:46 localhost kernel: sd 1:0:0:0: SCSI error: return code = 0x00010000

Check whether Pacemaker has started:

[root@node2 ~]# grep -i pcmk_startup /var/log/messages

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: CRM: Initialized

May 8 12:12:17 node2 corosync[7817]: [pcmk ] Logging: Initialized pcmk_startup

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: Service: 9

May 8 12:12:17 node2 corosync[7817]: [pcmk ] info: pcmk_startup: Local hostname: node2.a.com

node1

On either node, check the cluster's membership status:

[root@node1 corosync]# crm status

============

Last updated: Tue May 8 13:08:41 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

0 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Configuring the cluster's working properties

Corosync enables STONITH by default, but this cluster has no STONITH device, so the default configuration is not yet usable; disable STONITH first:

[root@node1 corosync]# crm configure property stonith-enabled=false

For a two-node cluster we must configure it to ignore quorum, so that a single surviving node can keep running even though it cannot win a vote:

[root@node1 corosync]# crm configure property no-quorum-policy=ignore

Define a resource stickiness value so that resources do not move between nodes arbitrarily, which would waste system resources.

Stickiness value ranges and their effect:

0: the default. The resource is placed at the optimal position in the system, meaning it moves when a better (or less loaded) node becomes available. This is essentially automatic failback, except that the resource may move to a node other than the one it previously ran on;

Greater than 0: the resource prefers to stay where it is, but will move if a more suitable node becomes available; higher values mean a stronger preference for the current position;

Less than 0: the resource prefers to move away from its current position; the higher the absolute value, the stronger the preference to leave;

INFINITY: unless forced off (node shutdown, node standby, migration-threshold reached, or a configuration change), the resource always stays where it is. This is almost equivalent to disabling automatic failback entirely;

-INFINITY: the resource always moves away from its current position;

Here we set a default stickiness value for all resources as follows:

[root@node1 corosync]# crm configure rsc_defaults resource-stickiness=100
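The property and default one-liners used in this section can also be batched: the crm shell accepts commands on standard input, so the whole bootstrap could be scripted roughly like this (a sketch, equivalent to the individual commands already shown):

```shell
# batch the global properties and resource defaults in one crm session
crm <<'EOF'
configure
property stonith-enabled=false
property no-quorum-policy=ignore
rsc_defaults resource-stickiness=100
commit
exit
EOF
```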

Defining the cluster services and resources (on node1)

View the current cluster configuration and confirm that the global properties are suitable for a two-node cluster:

[root@node1 corosync]# crm configure show

node node1.a.com

node node2.a.com

primitive webip ocf:heartbeat:IPaddr \

params ip="192.168.10.100"

primitive webserver lsb:httpd

group web webip webserver

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false" \

no-quorum-policy="ignore"

rsc_defaults $id="rsc-options" \

resource-stickiness="100"

Define the already configured DRBD device /dev/drbd0 as a cluster service; stop and disable the drbd init service on both nodes first:

[root@node1 corosync]# service drbd stop

Stopping all DRBD resources: .

[root@node1 corosync]# chkconfig drbd off

[root@node1 corosync]# drbd-overview

drbd not loaded

[root@node1 corosync]# ssh node2 "service drbd stop"

Stopping all DRBD resources: .

[root@node1 corosync]# ssh node2 "chkconfig drbd off"

[root@node1 corosync]# drbd-overview

drbd not loaded

Configure DRBD as a cluster resource:

The DRBD resource agent (RA) is currently provided under the OCF class by the linbit provider, at /usr/lib/ocf/resource.d/linbit/drbd. The RA classes and the RA's metadata can be viewed as follows:

[root@node1 corosync]# crm ra classes

heartbeat

lsb

ocf / heartbeat linbit pacemaker

stonith

[root@node1 corosync]# crm ra list ocf linbit

drbd

View the DRBD resource agent's details:

[root@node1 corosync]# crm ra info ocf:linbit:drbd

This resource agent manages a DRBD resource

as a master/slave resource. DRBD is a shared-nothing replicated storage

device. (ocf:linbit:drbd)

Master/Slave OCF Resource Agent for DRBD

Parameters (* denotes required, [] the default):

drbd_resource* (string): drbd resource name

The name of the drbd resource from the drbd.conf file.

drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf

Full path to the drbd.conf file.

Operations' defaults (advisory minimum):

start timeout=240

promote timeout=90

demote timeout=90

notify timeout=90

stop timeout=100

monitor_Slave interval=20 timeout=20 start-delay=1m

monitor_Master interval=10 timeout=20 start-delay=1m

DRBD must run on both nodes at once, but in the primary/secondary model only one node can be Master while the other is Slave. It is therefore a special multi-state (clone) resource: the nodes are split into a Master and a Slave, and both start out as Slave when the service first comes up. (Note that the transcript below defines the primitive with ocf:heartbeat:drbd, the older heartbeat-provided agent, rather than the ocf:linbit:drbd agent described above.)

[root@node1 corosync]# crm

crm(live)# configure

crm(live)configure# primitive mysqldrbd ocf:heartbeat:drbd params drbd_resource="mysql" op monitor role="Master" interval="30s" op monitor role="Slave" interval="31s" op start timeout="240s" op stop timeout="100s"

crm(live)configure# ms MS_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify="true"

crm(live)configure# show mysqldrbd

primitive mysqldrbd ocf:heartbeat:drbd \

params drbd_resource="mysql" \

op monitor interval="30s" role="Master" \

op monitor interval="31s" role="Slave" \

op start interval="0" timeout="240s" \

op stop interval="0" timeout="100s"

crm(live)configure# show MS_mysqldrbd

ms MS_mysqldrbd mysqldrbd \

meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# exit

bye

[root@node1 corosync]# crm status

============

Last updated: Sun May 13 21:41:57 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

2 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Resource Group: web

webip (ocf::heartbeat:IPaddr): Started node1.a.com

webserver (lsb:httpd): Started node1.a.com

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Slaves: [ node2.a.com node1.a.com ]

[root@node1 corosync]# drbdadm role mysql

Secondary/Secondary

Next we make the cluster mount the DRBD device at /mysqldata automatically. This filesystem resource must run on the node that is DRBD Master, and it can start only after DRBD has promoted that node to Primary.

[root@node1 corosync]# umount /dev/drbd0

umount: /dev/drbd0: not mounted

[root@node2 corosync]# umount /dev/drbd0

umount: /dev/drbd0: not mounted

[root@node1 corosync]# crm

crm(live)# configure

crm(live)configure# primitive MysqlFS ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysqldata" fstype="ext3" op start timeout=60s op stop timeout=60s

crm(live)configure# commit

crm(live)configure# exit

bye

Begin testing

Defining the mysql resources (performed on node1)

First create an IP address resource for the MySQL cluster; this is the address clients will use to reach the MySQL service:

[root@node1 corosync]# crm configure primitive myip ocf:heartbeat:IPaddr params ip=192.168.50.100

Configure the mysqld service as a highly available resource:

[root@node1 corosync]# crm configure primitive mysqlserver lsb:mysqld

[root@node1 ~]# crm status

============

Last updated: Sat May 12 15:40:57 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Masters: [ node1.a.com ]

Slaves: [ node2.a.com ]

MysqlFS (ocf::heartbeat:Filesystem): Started node1.a.com

myip (ocf::heartbeat:IPaddr): Started node2.a.com

Configuring resource constraints:

The cluster now owns all the required resources but may still not handle them correctly. Resource constraints specify which cluster nodes a resource may run on, the order in which resources load, and which other resources a given resource depends on. Pacemaker provides three kinds of resource constraints:

1) Resource Location: defines which nodes a resource can, cannot, or preferably should run on;

2) Resource Colocation: defines which cluster resources may or may not run together on a node;

3) Resource Order: defines the order in which cluster resources start on a node.

When defining constraints you also assign scores. Scores of all kinds are central to how the cluster works: everything from migrating a resource to deciding which resources to stop in a degraded cluster is achieved by manipulating scores. Scores are computed per resource, and any node whose score for a resource is negative cannot run that resource; after computing the scores,

the cluster picks the node with the highest score. INFINITY is currently defined as 1,000,000, and adding or subtracting it follows three basic rules:

1) any value + INFINITY = INFINITY

2) any value - INFINITY = -INFINITY

3) INFINITY - INFINITY = -INFINITY

Each constraint may also carry its own score, the value assigned to that constraint. Higher-scored constraints are applied first, lower-scored ones later. By creating additional location constraints with different scores for a given resource, you can control the order of the nodes it will fail over to.
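As an illustration only (not Pacemaker's actual code), the three infinity rules amount to saturating addition in which a -INFINITY operand always dominates:

```shell
# pacemaker-style score addition: saturate at +/-INFINITY (= 1000000),
# with the rule that a -INFINITY operand always wins
INF=1000000
score_add() {
  a=$1; b=$2
  # rules 2 and 3: any -INFINITY operand makes the result -INFINITY
  if [ "$a" -le "-$INF" ] || [ "$b" -le "-$INF" ]; then echo "-$INF"; return; fi
  # rule 1: otherwise any +INFINITY operand makes the result +INFINITY
  if [ "$a" -ge "$INF" ] || [ "$b" -ge "$INF" ]; then echo "$INF"; return; fi
  s=$(( a + b ))
  if [ "$s" -gt "$INF" ]; then s=$INF; fi       # finite sums also saturate
  if [ "$s" -lt "-$INF" ]; then s=-$INF; fi
  echo "$s"
}
score_add "$INF" 200        # prints 1000000
score_add 200 "-$INF"       # prints -1000000
score_add "$INF" "-$INF"    # prints -1000000
```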

We define the following constraints:

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver

crm(live)configure# order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start

crm(live)configure# order myip_after_MysqlFS mandatory: MysqlFS myip

crm(live)configure# order mysqlserver_after_myip mandatory: myip mysqlserver

Verify that there are no errors:

crm(live)configure# verify

Commit:

crm(live)configure# commit

crm(live)configure# exit

bye
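A quick sanity check is to grep the committed configuration for the four constraints. A sketch follows; it runs against a captured copy of the relevant `crm configure show` lines so it is self-contained, but in the live cluster you would pipe the command's real output instead:

```shell
#!/bin/sh
# Sketch: confirm the four constraints were committed. The cfg variable
# below is a captured sample; live use: cfg=$(crm configure show).
cfg='colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver
order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start
order myip_after_MysqlFS inf: MysqlFS myip
order mysqlserver_after_myip inf: myip mysqlserver'

for c in MysqlFS_with_mysqldrbd MysqlFS_after_mysqldrbd \
         myip_after_MysqlFS mysqlserver_after_myip; do
    if echo "$cfg" | grep -q "$c"; then
        echo "$c: present"
    else
        echo "$c: MISSING" >&2
    fi
done
```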

View the configuration:

[root@node1 ~]# crm configure show

node node1.a.com

node node2.a.com

primitive MysqlFS ocf:heartbeat:Filesystem \

params device="/dev/drbd0" directory="/mysqldata" fstype="ext3" \

op start interval="0" timeout="60s" \

op stop interval="0" timeout="60s"

primitive myip ocf:heartbeat:IPaddr \

params ip="192.168.50.100"

primitive mysqldrbd ocf:heartbeat:drbd \

params drbd_resource="mysql" \

op monitor interval="30s" role="Master" \

op monitor interval="31s" role="Slave" \

op start interval="0" timeout="240s" \

op stop interval="0" timeout="100s"

primitive mysqlserver lsb:mysqld

ms MS_mysqldrbd mysqldrbd \

meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

colocation MysqlFS_with_mysqldrbd inf: MysqlFS MS_mysqldrbd:Master myip mysqlserver

order MysqlFS_after_mysqldrbd inf: MS_mysqldrbd:promote MysqlFS:start

order myip_after_MysqlFS inf: MysqlFS myip

order mysqlserver_after_myip inf: myip mysqlserver

property $id="cib-bootstrap-options" \

dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \

cluster-infrastructure="openais" \

expected-quorum-votes="2" \

stonith-enabled="false" \

no-quorum-policy="ignore"

rsc_defaults $id="rsc-options" \

resource-stickiness="100"

Check the running status:

[root@node1 ~]# crm status

============

Last updated: Sat May 13 22:49:26 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Online: [ node1.a.com node2.a.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Masters: [ node1.a.com ]

Slaves: [ node2.a.com ]

MysqlFS (ocf::heartbeat:Filesystem): Started node1.a.com

myip (ocf::heartbeat:IPaddr): Started node1.a.com

mysqlserver (lsb:mysqld): Started node1.a.com

As shown, the services are now running normally on node1.

On node1, check the mysql service status:

[root@node1 ~]# service mysqld status

MySQL running (6578) [ OK ]

Check that the filesystem was mounted automatically:

[root@node1 ~]# mount

/dev/sda3 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

none on /proc/fs/vmblock/mountPoint type vmblock (rw)

/dev/hdc on /media/RHEL_5.4 i386 DVD type iso9660 (ro,noexec,nosuid,nodev,uid=0)

List the directory:

[root@node1 ~]# ls /mysqldata/

data f1 f2 lost+found

Check the status of the VIP:

[root@node1 ~]# ifconfig

eth0 Link encap:Ethernet HWaddr 00:0C:29:B2:82:C4

inet addr:192.168.10.10 Bcast:192.168.10.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:feb2:82c4/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:122954 errors:0 dropped:0 overruns:0 frame:0

TX packets:826224 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:12821186 (12.2 MiB) TX bytes:1150412694 (1.0 GiB)

Interrupt:67 Base address:0x2024

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:B2:82:C4

inet addr:192.168.50.100 Bcast:192.168.50.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:67 Base address:0x2024

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:7250 errors:0 dropped:0 overruns:0 frame:0

TX packets:7250 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:2936563 (2.8 MiB) TX bytes:2936563 (2.8 MiB)
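Checking the VIP by eye works, but it can also be scripted. A sketch follows; it extracts the address bound to the eth0:0 alias from a captured snippet of the ifconfig output above, so it runs standalone (live use would be `snippet=$(ifconfig eth0:0)`):

```shell
#!/bin/sh
# Sketch: pull the inet address from the eth0:0 alias and compare it
# against the expected VIP. The snippet is a captured sample.
snippet='eth0:0 Link encap:Ethernet HWaddr 00:0C:29:B2:82:C4
         inet addr:192.168.50.100 Bcast:192.168.50.255 Mask:255.255.255.0'

vip=$(echo "$snippet" | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
if [ "$vip" = "192.168.50.100" ]; then
    echo "VIP $vip is up on eth0:0"
else
    echo "unexpected VIP: $vip" >&2
fi
```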

Continue testing:

On node1, take node1 offline (standby):

[root@node1 ~]# crm node standby

Check the cluster status:

[root@node1 ~]# crm status

============

Last updated: Sat May 12 15:56:11 2012

Stack: openais

Current DC: node1.a.com - partition with quorum

Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f

2 Nodes configured, 2 expected votes

4 Resources configured.

============

Node node1.a.com: standby

Online: [ node2.a.com ]

Master/Slave Set: MS_mysqldrbd [mysqldrbd]

Masters: [ node2.a.com ]

Stopped: [ mysqldrbd:0 ]

MysqlFS (ocf::heartbeat:Filesystem): Started node2.a.com

myip (ocf::heartbeat:IPaddr): Started node2.a.com

mysqlserver (lsb:mysqld): Started node2.a.com

As shown, all of our resources have failed over to node2.

Check the service status on node2:

[root@node2 ~]# service mysqld status

MySQL running (7805) [ OK ]

List the directory:

[root@node2 ~]# ls /mysqldata/

data f1 f2 lost+found
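The listing on node2 matches what node1 showed before failover, which is exactly what DRBD replication should guarantee. A tiny sketch of that comparison (run here against the captured listings; live use would be `after=$(ssh node2 ls /mysqldata)`):

```shell
#!/bin/sh
# Sketch: compare the /mysqldata listing captured on node1 before
# failover with the one captured on node2 afterwards.
before='data f1 f2 lost+found'   # ls /mysqldata on node1, pre-failover
after='data f1 f2 lost+found'    # ls /mysqldata on node2, post-failover

if [ "$before" = "$after" ]; then
    echo "replicated data intact"
else
    echo "listings differ!" >&2
fi
```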

OK, everything now looks normal, so we can verify that the mysql service can be accessed:

We defined access to the mysql service through the VIP 192.168.50.100. Now, on node2, create an account that hosts in a given network segment can use (this change is written to the DRBD device and therefore synchronized to node1):

[root@node2 ~]# mysql

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 1

Server version: 5.5.15-log MySQL Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all on *.* to test@'192.168.%.%' identified by '123456';

Query OK, 0 rows affected (0.01 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.06 sec)

mysql> exit

Bye

Then access the service from another host:

[root@node1 ~]# mysql -u test -h 192.168.50.100

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 3

Server version: 5.5.15-log MySQL Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.