Installing Sun Cluster on Solaris 10 x86

Environment Preparation

The environment is built with VMware Workstation.

Install two x86 Solaris 10 guests, with hostnames cluster01 and cluster02.

Each node has three network adapters connected to virtual switches, configured as shown in the figures below.

(Screenshots: network adapter configuration of the two VMs)



The VMware virtual network (vnet) configuration is as follows:

(Screenshot: vnet configuration)
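Because the screenshots are not reproduced here, the following is a minimal sketch of the layout this walkthrough appears to assume, based on the adapter names and subnet that appear later: e1000g0 on the public 192.168.221.0/24 network, with e1000g1 and e1000g2 reserved for the private interconnect. The node addresses shown are assumptions; only the Openfiler address 192.168.221.99 appears in the original text.

# cat /etc/hostname.e1000g0        # public interface of cluster01
cluster01

# cat /etc/hosts
127.0.0.1        localhost
192.168.221.11   cluster01 loghost   # assumed address
192.168.221.12   cluster02           # assumed address
192.168.221.99   openfiler           # iSCSI target used in part 3

# e1000g1 and e1000g2 are left unplumbed; scinstall later configures them
# as the cluster transport adapters.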



1. Installing the Cluster Software

Install the cluster software on cluster01.

(Screenshots: cluster software installation steps)

Perform the same cluster software installation on cluster02.
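As a quick sanity check (an addition here, not shown in the original screenshots), the installed Sun Cluster release and package versions can be printed on each node:

# /usr/cluster/bin/scinstall -pv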

2. Establishing the Cluster

On cluster01, run ./scinstall (the sc* commands live in /usr/cluster/bin).

(Screenshot: scinstall menu)

Enter 1.

(Screenshots: scinstall prompts)

Additional steps

On each cluster node, log in as root and create a /.rhosts file with the following content:

# cat /.rhosts

+

Also add each other's entries to /etc/hosts on both nodes; only the 192.168.221.x (public) subnet entries are needed. For example:
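A minimal sketch of these two steps, using the assumed node addresses from the network-layout sketch above (only the Openfiler address 192.168.221.99 is given in the original); note that a bare "+" in /.rhosts trusts every host and is only acceptable in an isolated lab:

# echo "+" > /.rhosts

# cat >> /etc/hosts <<EOF
192.168.221.11  cluster01    # assumed address
192.168.221.12  cluster02    # assumed address
EOF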

(Screenshots: scinstall prompts)

Continue and choose the Typical installation.

(Screenshot: installation type selection)

Enter the cluster name; here it is named sap.

(Screenshot: cluster name prompt)

Enter the node hostnames.

(Screenshot: node name prompt)

Press Ctrl+D to finish the list of node names.

(Screenshot: scinstall prompt)

Accept the default:

>>> Authenticating Requests to Add Nodes <<<

Once the first node establishes itself as a single node cluster, other

nodes attempting to add themselves to the cluster configuration must

be found on the list of nodes you just provided. You can modify this

list by using claccess(1CL) or other tools once the cluster has been

established.

By default, nodes are not securely authenticated as they attempt to

add themselves to the cluster configuration. This is generally

considered adequate, since nodes which are not physically connected to

the private cluster interconnect will never be able to actually join

the cluster. However, DES authentication is available. If DES

authentication is selected, you must configure all necessary

encryption keys before any node will be allowed to join the cluster

(see keyserv(1M), publickey(4)).

Do you need to use DES authentication (yes/no) [no]?

Accept the default:

>>> Minimum Number of Private Networks <<<

Each cluster is typically configured with at least two private

networks. Configuring a cluster with just one private interconnect

provides less availability and will require the cluster to spend more

time in automatic recovery if that private interconnect fails.

Should this cluster use at least two private networks (yes/no) [yes]?

Accept the default:

>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected

interconnect. That is, no cluster switches are configured. However,

when there are greater than two nodes, this interactive form of

scinstall assumes that there will be exactly one switch for each

private network.

Does this two-node cluster use switches (yes/no) [yes]?

Accept the default:

>>> Cluster Switches <<<

All cluster transport adapters in this cluster must be cabled to a

"switch". And, each adapter on a given node must be cabled to a

different switch. Interactive scinstall requires that you identify one

switch for each private network in the cluster.

What is the name of the first switch in the cluster [switch1]?

What is the name of the second switch in the cluster [switch2]?

Accept the default switch names; the first transport adapter connects to the first switch:

>>> Cluster Transport Adapters and Cables <<<

Transport adapters are the adapters that attach to the private cluster

interconnect.

Select the first cluster transport adapter:

1) e1000g1

2) e1000g2

3) Other

Option: 1

Will this be a dedicated cluster transport adapter (yes/no) [yes]?

Adapter "e1000g1" is an Ethernet adapter.

Searching for any unexpected network traffic on "e1000g1" ... done

Verification completed. No traffic was detected over a 10 second

sample period.

The "dlpi" transport type will be set for this cluster.

Name of the switch to which "e1000g1" is connected [switch1]? e1000g1

Unknown switch.

Name of the switch to which "e1000g1" is connected [switch1]?

Accept the default (switch1); the second transport adapter then connects to the second switch:

Each adapter is cabled to a particular port on a switch. And, each

port is assigned a name. You can explicitly assign a name to each

port. Or, for Ethernet and Infiniband switches, you can choose to

allow scinstall to assign a default name for you. The default port

name assignment sets the name to the node number of the node hosting

the transport adapter at the other end of the cable.

Use the default port name for the "e1000g1" connection (yes/no) [yes]?

Select the second cluster transport adapter:

1) e1000g1

2) e1000g2

3) Other

Option: 2

Will this be a dedicated cluster transport adapter (yes/no) [yes]?

Adapter "e1000g2" is an Ethernet adapter.

Searching for any unexpected network traffic on "e1000g2" ... done

Verification completed. No traffic was detected over a 10 second

sample period.

The "dlpi" transport type will be set for this cluster.

Name of the switch to which "e1000g2" is connected [switch2]?

Use the default port name for the "e1000g2" connection (yes/no) [yes]?

>>> Network Address for the Cluster Transport <<<

The cluster transport uses a default network address of 172.16.0.0. If

this IP address is already in use elsewhere within your enterprise,

specify another address from the range of recommended private

addresses (see RFC 1918 for details).

The default netmask is 255.255.240.0. You can select another netmask,

as long as it minimally masks all bits that are given in the network

address.

The default private netmask and network address result in an IP

address range that supports a cluster with a maximum of 32 nodes, 10

private networks, and 12 virtual clusters.

Is it okay to accept the default network address (yes/no) [yes]?

Is it okay to accept the default netmask (yes/no) [yes]?

Plumbing network address 172.16.0.0 on adapter e1000g1 >> NOT DUPLICATE ... done

Plumbing network address 172.16.0.0 on adapter e1000g2 >> NOT DUPLICATE ... done

Accept the default:

>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on

/global/.devices/node@<nodeID> before it can successfully participate

as a cluster member. Since the "nodeID" is not assigned until

scinstall is run, scinstall will set this up for you.

You must supply the name of either an already-mounted file system or a

raw disk partition which scinstall can use to create the global

devices file system. This file system or partition should be at least

512 MB in size.

Alternatively, you can use a loopback file (lofi), with a new file

system, and mount it on /global/.devices/node@<nodeID>.

If an already-mounted file system is used, the file system must be

empty. If a raw disk partition is used, a new file system will be

created for you.

If the lofi method is used, scinstall creates a new 100 MB file system

from a lofi device by using the file /.globaldevices. The lofi method

is typically preferred, since it does not require the allocation of a

dedicated disk slice.

The default is to use lofi.

Is it okay to use this default (yes/no) [yes]?
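After the interview, scinstall configures each node and reboots it into the cluster. The checks below are an addition to the original walkthrough, not part of its transcript; they assume the standard /usr/cluster/bin location and that the first node received nodeID 1:

# /usr/cluster/bin/scstat -n          # cluster membership: both nodes should be Online
# df -k /global/.devices/node@1       # lofi-backed global devices file system of node 1
# ifconfig clprivnet0                 # pseudo-interface plumbed on the private interconnect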

3. Configuring Shared Storage and the Quorum Device

Openfiler is used to present an iSCSI LUN as the shared storage.

(Screenshot: Openfiler iSCSI target configuration)



Check that the Solaris iSCSI initiator packages are installed:

# pkginfo SUNWiscsiu SUNWiscsir

system SUNWiscsir Sun iSCSI Device Driver (root)

system SUNWiscsiu Sun iSCSI Management Utilities (usr)
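If either package were missing, it could be added from the Solaris 10 media with pkgadd; the media path below assumes a locally mounted Solaris 10 DVD:

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWiscsir SUNWiscsiu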

Run the following commands in turn (on both nodes):

iscsiadm add static-config iqn.2006-01.com.openfiler:tsn.b44c980dd213,192.168.221.99:3260

iscsiadm modify discovery -s enable

devfsadm -i iscsi

iscsiadm list target
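To confirm that the LUN is really visible to the initiator (an optional check added here), the target can be listed with SCSI-level details and the new disk should appear in format; the device name c3t1d0 matches what scdidadm reports later, but the controller number can differ per system:

# iscsiadm list target -S
# echo | format              # the iSCSI LUN should be listed, e.g. as c3t1d0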



To remove the iSCSI configuration later, run:

iscsiadm remove discovery-address 192.168.221.99:3260
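Since the target above was added as a static configuration, a complete clean-up would also remove the static entry and disable static discovery; this is a sketch added here, not part of the original:

# iscsiadm remove static-config iqn.2006-01.com.openfiler:tsn.b44c980dd213,192.168.221.99:3260
# iscsiadm modify discovery -s disable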



List the shared devices:

# ./scdidadm -L

1 cluster01:/dev/rdsk/c0d0 /dev/did/rdsk/d1

1 cluster02:/dev/rdsk/c0d0 /dev/did/rdsk/d1

#

#


The shared disk has not been picked up yet, so scan for new shared devices:

# ./scgdevs

Configuring DID devices

/usr/cluster/bin/scdidadm: Inquiry on device "/dev/rdsk/c0d0s2" failed.

did instance 2 created.

did subpath cluster01:/dev/rdsk/c3t1d0 created for instance 2.

Configuring the /dev/global directory (global devices)

obtaining access to all attached disks



List the shared devices again:

# ./scdidadm -L

1 cluster01:/dev/rdsk/c0d0 /dev/did/rdsk/d1

1 cluster02:/dev/rdsk/c0d0 /dev/did/rdsk/d1

2 cluster01:/dev/rdsk/c3t1d0 /dev/did/rdsk/d2

2 cluster02:/dev/rdsk/c3t1d0 /dev/did/rdsk/d2

The shared disk is now recognized: running scgdevs once on cluster01 was enough for both nodes to pick up the new DID device.
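As an optional cross-check (not in the original), each node can also list only its own DID mappings; -l shows the local node's paths, while -L, used above, shows every node:

# ./scdidadm -l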



Configure the quorum device:

# ./scsetup

>>> Initial Cluster Setup <<<

This program has detected that the cluster "installmode" attribute is

still enabled. As such, certain initial cluster setup steps will be

performed at this time. This includes adding any necessary quorum

devices, then resetting both the quorum vote counts and the

"installmode" property.

Please do not proceed if any additional nodes have yet to join the

cluster.

Is it okay to continue (yes/no) [yes]? yes

Do you want to add any quorum devices (yes/no) [yes]? yes

Following are supported Quorum Devices types in Oracle Solaris

Cluster. Please refer to Oracle Solaris Cluster documentation for

detailed information on these supported quorum device topologies.

What is the type of device you want to use?

1) Directly attached shared disk

2) Network Attached Storage (NAS) from Network Appliance

3) Quorum Server

q) Return to the quorum menu

Option: 1

>>> Add a SCSI Quorum Disk <<<

A SCSI quorum device is considered to be any Oracle Solaris Cluster

supported attached storage which connected to two or more nodes of the

cluster. Dual-ported SCSI-2 disks may be used as quorum devices in

two-node clusters. However, clusters with more than two nodes require

that SCSI-3 PGR disks be used for all disks with more than two

node-to-disk paths.

You can use a disk containing user data or one that is a member of a

device group as a quorum device.

For more information on supported quorum device topologies, see the

Oracle Solaris Cluster documentation.

Is it okay to continue (yes/no) [yes]? yes

Which global device do you want to use (d)? d2

Is it okay to proceed with the update (yes/no) [yes]? yes

scconf -a -q globaldev=d2

Command completed successfully.

Press Enter to continue:

Do you want to add another quorum device (yes/no) [yes]? no

Once the "installmode" property has been reset, this program will skip

"Initial Cluster Setup" each time it is run again in the future.

However, quorum devices can always be added to the cluster using the

regular menu options. Resetting this property fully activates quorum

settings and is necessary for the normal and safe operation of the

cluster.

Is it okay to reset "installmode" (yes/no) [yes]? yes

scconf -c -q reset

scconf -a -T node=.

Cluster initialization is complete.

Type ENTER to proceed to the main menu:

The quorum device has been configured successfully.
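The quorum configuration alone can also be reviewed at any time (an optional check added here):

# ./scstat -q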



Check the cluster status:

# ./scstat -p

------------------------------------------------------------------

-- Cluster Nodes --

Node name Status

--------- ------

Cluster node: cluster01 Online

Cluster node: cluster02 Online

------------------------------------------------------------------

-- Cluster Transport Paths --

Endpoint Endpoint Status

-------- -------- ------

Transport path: cluster01:e1000g2 cluster02:e1000g2 Path online

Transport path: cluster01:e1000g1 cluster02:e1000g1 Path online

------------------------------------------------------------------

-- Quorum Summary from latest node reconfiguration --

Quorum votes possible: 3

Quorum votes needed: 2

Quorum votes present: 3

-- Quorum Votes by Node (current status) --

Node Name Present Possible Status

--------- ------- -------- ------

Node votes: cluster01 1 1 Online

Node votes: cluster02 1 1 Online

-- Quorum Votes by Device (current status) --

Device Name Present Possible Status

----------- ------- -------- ------

Device votes: /dev/did/rdsk/d2s2 1 1 Online

------------------------------------------------------------------

This completes the basic Sun Cluster configuration. Further configuration depends on the resources to be hosted, for example HA for Oracle or HA for SAP. A minimal example of the next step follows.
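As a hedged illustration only (not part of the original post), the same sc* command set can register a simple failover resource group with a logical hostname; the names sap-rg and sap-lh are hypothetical, and sap-lh would have to resolve in /etc/hosts on both nodes:

# scrgadm -a -g sap-rg                  # create an empty failover resource group
# scrgadm -a -L -g sap-rg -l sap-lh     # add a LogicalHostname resource for the floating IP
# scswitch -Z -g sap-rg                 # bring the group online and enable its resources
# ./scstat -g                           # check resource group status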

Source: ITPUB blog, http://blog.itpub.net/27771627/viewspace-1292968/
