---------------------------- Two-node cluster setup ----------------------------
1) First set up the IP environment. Each node gets a floating (virtual) IP. Put the following entries in /etc/hosts; node1 and node2 get the same content.
node1:
127.0.0.1 localhost
192.168.0.101 node1 loghost        primary NIC (addressing: x01 = nodeX primary NIC, x02 = nodeX standby NIC)
192.168.0.102 node1_qfe0           standby NIC
192.168.0.111 node1_float          virtual IP
192.168.0.201 node2
192.168.0.202 node2_qfe0
192.168.0.222 node2_float
192.168.0.88 test_lh               logical hostname for the resource
node2: (same entries)
127.0.0.1 localhost
192.168.0.101 node1 loghost
192.168.0.102 node1_qfe0
192.168.0.111 node1_float
192.168.0.201 node2
192.168.0.202 node2_qfe0
192.168.0.222 node2_float
192.168.0.88 test_lh               logical hostname for the resource
2) Manually create /.rhosts on both nodes.
A single "+" in the file means every host is trusted (convenient during installation, but insecure).
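Creating the file can be scripted; a minimal sketch (the /tmp path is illustrative only — in production the file is /.rhosts on node1 and node2, and this wide-open trust should be removed once the cluster is installed):

```shell
# Create a wide-open .rhosts: a single "+" trusts every host (lab use only).
# In production this file is /.rhosts on each node; /tmp is used here for illustration.
RHOSTS=/tmp/example.rhosts
printf '+\n' > "$RHOSTS"
chmod 600 "$RHOSTS"          # keep it readable by root only
cat "$RHOSTS"
```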
3) Synchronize the clocks, using node2 as the reference. On node2 run the following commands:
svcs -a | grep time                       # check whether the time services are running
svcadm enable svc:/network/time:dgram
svcadm enable svc:/network/time:stream    # both services must be online for rdate to work
svcs -a | grep time                       # re-check and confirm both services are now online
On node1 run: rdate node2                 # pull the time from node2 (node2 is the reference)
Run date on node1 and node2 to confirm they now agree.
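A one-time rdate only aligns the clocks at that moment; if full NTP is not configured, a root crontab entry on node1 can re-sync periodically (a hypothetical example — the hourly interval is arbitrary):

```shell
# Hypothetical root crontab entry on node1: re-sync from node2 every hour.
# Install it with: crontab -e  (as root)
0 * * * * /usr/bin/rdate node2 >/dev/null 2>&1
```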
4) Configure IPMP.
node1: configure the primary and standby NICs:
/etc/hostname.eri0:
192.168.0.101 group ipmp0 up
addif 192.168.0.111 -failover deprecated up
/etc/hostname.qfe0:
192.168.0.102 group ipmp0 -failover deprecated up
node2: the same layout:
/etc/hostname.eri0:
192.168.0.201 group ipmp0 up
addif 192.168.0.222 -failover deprecated up
/etc/hostname.qfe0:
192.168.0.202 group ipmp0 -failover deprecated up
5) Reboot both nodes once the IPMP configuration is in place.
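After the reboot, the IPMP group can be checked and exercised; the commands below are the standard Solaris IPMP tools, shown as a transcript (Solaris-specific, not portable):

```shell
# Verify IPMP group membership and addresses on each node.
ifconfig -a            # eri0 carries the data and test addresses, qfe0 is the standby
# Force a failover to prove the standby takes over, then recover:
if_mpadm -d eri0       # detach (offline) the primary NIC
if_mpadm -r eri0       # reattach it
```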
6) Create the /globaldevices file system and define it in /etc/vfstab so the global-devices file system mounts at boot.
On node1 and node2: format c1t0d0 and carve out a 512 MB slice s6,
reserved as /globaldevices for the cluster (c1t0d0s6).
SDS (Solstice DiskSuite) also needs a dedicated slice on each node's local disk for its state-database replicas:
format: create a 30 MB slice c1t0d0s7
metadb -afc 3 /dev/rdsk/c1t0d0s7           # create three replicas on the slice
metadb                                     # verify the replicas
newfs /dev/rdsk/c1t0d0s6                   # build the file system
mkdir /globaldevices                       # create the mount point
/etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t0d0s1 - - swap - no -
/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -
/dev/dsk/c1t0d0s3 /dev/rdsk/c1t0d0s3 /var ufs 1 no -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
/dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /globaldevices ufs 1 no -     <- add for automatic mounting; the cluster install comments this entry out later
Apply the latest patch cluster.
7) Install the Sun Cluster software (it installs into /usr/cluster). Unpack with: unzip suncluster-3_2-ga-solaris-sparc.zip, then run the installer.
During installation do not configure the cluster; choose manual (later) configuration.
8) Append
PATH=$PATH:/opt/SUNWcluster/bin:/usr/cluster/bin:/usr/cluster/lib/sc:/usr/cluster/dtk/bin
export PATH
to /etc/profile so the cluster commands are available in every login shell (this step is optional):
#ident "@(#)profile 1.19 01/03/13 SMI" /* SVr4.0 1.3 */
# The profile that all logins get before using their own .profile.
trap "" 2 3
export LOGNAME PATH
if [ "$TERM" = "" ]
then
if /bin/i386
then
TERM=sun-color
else
TERM=sun
fi
export TERM
fi
# Login and -su shells get /etc/profile services.
# -rsh is given its environment in its .profile.
case "$0" in
-sh | -ksh | -jsh | -bash)
if [ ! -f .hushlogin ]
then
/usr/sbin/quota
# Allow the user to break the Message-Of-The-Day only.
trap "trap '' 2" 2
/bin/cat -s /etc/motd
trap "" 2
/bin/mail -E
case $? in
0)
echo "You have new mail."
;;
2)
echo "You have mail."
;;
esac
fi
esac
umask 022
trap 2 3
PATH=$PATH:/opt/SUNWcluster/bin:/usr/cluster/bin:/usr/cluster/lib/sc:/usr/cluster/dtk/bin
export PATH
9) Configure node1 and node2 to join the cluster: run scinstall.
bash-3.00# cd /usr/cluster/bin
bash-3.00# pwd
/usr/cluster/bin
bash-3.00# ./scinstall
Menu options:
1) Create a new cluster: the remote node is configured and rebooted first,
   then the current node is rebooted.
2) Create the first node of a new cluster: the current node reboots when its
   configuration is complete.
   You may disable automatic quorum-device selection here (it is enabled by default).
3) Configure the second node: add it to the existing cluster.
With both nodes joined, the basic two-node configuration is complete.
10) Add the quorum disk.
format: label the quorum disk (give it a volume name so it can be identified)
scsetup: add the quorum disk interactively
or run: scconf -a -q globaldev=d4          # pick the quorum disk from the shared storage
11) Create the shared diskset.
scdidadm -L        # list the global (DID) devices; this shows how each local path maps to a global DID
1 node1:/dev/rdsk/c0t6d0 /dev/did/rdsk/d1
2 node1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
3 node1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d3
4 node1:/dev/rdsk/c3t67d0 /dev/did/rdsk/d4
4 node2:/dev/rdsk/c3t67d0 /dev/did/rdsk/d4     quorum disk
5 node1:/dev/rdsk/c3t65d0 /dev/did/rdsk/d5
5 node2:/dev/rdsk/c3t65d0 /dev/did/rdsk/d5     d5 and d6 are the shared disks to be added to the test_data diskset
6 node1:/dev/rdsk/c3t68d0 /dev/did/rdsk/d6
6 node2:/dev/rdsk/c3t68d0 /dev/did/rdsk/d6
7 node1:/dev/rdsk/c3t66d0 /dev/did/rdsk/d7
7 node2:/dev/rdsk/c3t66d0 /dev/did/rdsk/d7
8 node1:/dev/rdsk/c3t70d0 /dev/did/rdsk/d8
8 node2:/dev/rdsk/c3t70d0 /dev/did/rdsk/d8
9 node1:/dev/rdsk/c3t81d0 /dev/did/rdsk/d9
9 node2:/dev/rdsk/c3t81d0 /dev/did/rdsk/d9
10 node1:/dev/rdsk/c3t69d0 /dev/did/rdsk/d10
10 node2:/dev/rdsk/c3t69d0 /dev/did/rdsk/d10
11 node1:/dev/rdsk/c3t80d0 /dev/did/rdsk/d11
11 node2:/dev/rdsk/c3t80d0 /dev/did/rdsk/d11
12 node1:/dev/rdsk/c3t82d0 /dev/did/rdsk/d12
12 node2:/dev/rdsk/c3t82d0 /dev/did/rdsk/d12
13 node2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d13
14 node2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d14
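When the DID list is long, the shared disks — the DID paths reported by both nodes — can be picked out mechanically. A small sketch, using a few lines of the `scdidadm -L` output above pasted into a variable for illustration:

```shell
# Find DID paths that appear under more than one node: those are the shared disks.
scdidadm_out='4 node1:/dev/rdsk/c3t67d0 /dev/did/rdsk/d4
4 node2:/dev/rdsk/c3t67d0 /dev/did/rdsk/d4
5 node1:/dev/rdsk/c3t65d0 /dev/did/rdsk/d5
5 node2:/dev/rdsk/c3t65d0 /dev/did/rdsk/d5
3 node1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d3'
# Column 3 is the DID path; a path listed twice is visible from both nodes.
shared=$(printf '%s\n' "$scdidadm_out" | awk 'NF {print $3}' | sort | uniq -d)
printf '%s\n' "$shared"
```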
Create the mount point for the shared file system:
mkdir /test
metaset -s test_data -a -h node1 node2                         # create the diskset and its host list
metaset -s test_data -a /dev/did/rdsk/d5 /dev/did/rdsk/d6      # add the shared disks to the diskset
scswitch -z -D test_data -h node2                              # switch the device group's primary node
Create the file system:
metainit -s test_data d50 1 1 /dev/did/rdsk/d5s0
cd /dev/md                                                     # (optional) inspect the metadevice nodes
newfs /dev/md/test_data/rdsk/d50                               # SVM (Solstice DiskSuite) metadevice
newfs /dev/vx/rdsk/rootdg/lvtest                               # Veritas (VxVM) equivalent
Edit /etc/vfstab so the shared file system mounts automatically; this step is needed for both SDS and Veritas.
Verify manually that the file system mounts on each node in turn: it can only be mounted on the device group's current primary node (node1 here), never on both nodes at once.
/etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t0d0s1 - - swap - no -
/dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0 / ufs 1 no -
/dev/dsk/c1t0d0s3 /dev/rdsk/c1t0d0s3 /var ufs 1 no -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
#/dev/dsk/c1t0d0s6 /dev/rdsk/c1t0d0s6 /globaldevices ufs 1 no -
/dev/did/dsk/d2s6 /dev/did/rdsk/d2s6 /global/.devices/node@1 ufs 2 no global     <- DID global-devices entry added and managed by the cluster software
(the /globaldevices entry above has been commented out with #)
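The listing above covers only the global-devices entry; the shared /test file system needs its own vfstab line on both nodes as well. A sketch of that entry, assuming the SVM metadevice d50 and mount point /test created earlier (under Veritas, substitute the /dev/vx/... paths):

```shell
# /etc/vfstab entry (one line) for the shared file system, on node1 and node2:
/dev/md/test_data/dsk/d50 /dev/md/test_data/rdsk/d50 /test ufs 2 no global
```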
Switch the device group to confirm automatic mounting: scswitch -z -D test_data -h node2   # node2 becomes primary and mounts the global file system
12) Use scsetup to add the resource group (testrg):
1. Create the logical-hostname resource test_lh.
2. The mapping between the logical hostname and the service IP must already be defined in /etc/hosts (done in step 1).
3. Select the HA data-service resources.
4. Register the server resource (HA-NFS here); service start/stop is driven by scripts.
5. Register the listener resource.
scswitch -z -g testrg -h node2             # switch the resource group to the other node
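After the switchover, cluster state can be confirmed with the Sun Cluster status commands, shown as a transcript (scstat is the classic interface; the cl* commands are the newer object-oriented equivalents in 3.2):

```shell
scstat -g              # resource-group state on each node
scstat -q              # quorum configuration and votes
clrg status            # Sun Cluster 3.2 equivalent for resource groups
clrs status            # per-resource state
```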
Reposted from: https://blog.51cto.com/lijunzong/333766