1 Basic Information Gathering
1.1 Viewing the server model
# model
9000/800/L3000-7x
1.2 Viewing CPU and memory usage
#top
System: portal1 Thu Jul 1 11:55:15 2004
Load averages: 0.00, 0.02, 0.03
143 processes: 135 sleeping, 8 running
Cpu states:
CPU LOAD USER NICE SYS IDLE BLOCK SWAIT INTR SSYS
0 0.00 0.0% 0.0% 0.2% 99.8% 0.0% 0.0% 0.0% 0.0%
1 0.00 0.0% 0.0% 0.0% 100.0% 0.0% 0.0% 0.0% 0.0%
--- ---- ----- ----- ----- ----- ----- ----- ----- -----
avg 0.00 0.0% 0.0% 0.2% 99.8% 0.0% 0.0% 0.0% 0.0%
Memory: 815620K (668292K) real, 1130440K (920468K) virtual, 573140K free Page# 1/5
CPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU COMMAND
0 ? 23932 root -27 20 10848K 7032K run 6:15 0.68 0.68 cmcld
0 ? 35 root 152 20 1472K 1472K run 6:19 0.33 0.33 vxfsd
0 ? 1197 oracle 156 20 438M 19736K sleep 0:00 0.19 0.19 ora_smon_ptl1
… …
Press 'q' to quit.
1.3 Viewing memory (boot messages)
#dmesg
Jul 1 12:32
gate64: sysvec_vaddr = 0xc0002000 for 2 pages
NOTICE: autofs_link(): File system was registered at index 3.
NOTICE: cachefs_link(): File system was registered at index 5.
NOTICE: nfs3_link(): File system was registered at index 6.
0 sba
0/0 lba
… …
1.4 Viewing the processor word size
# getconf KERNEL_BITS
64
1.5 Viewing swap space
# swapinfo -a
Kb Kb Kb PCT START/ Kb
TYPE AVAIL USED FREE USED LIMIT RESERVE PRI NAME
dev 4194304 0 4194304 0% 0 - 1 /dev/vg00/lvol2
reserve - 69224 -69224
memory 1572088 167116 1404972 11%
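The PCT USED column is simply USED divided by AVAIL, rounded to a whole percent. A quick check on the memory row above (values copied from the output; plain POSIX shell arithmetic):

```shell
# Recompute swapinfo's PCT USED for the "memory" row:
# 167116 KB used out of 1572088 KB available -> 11%.
USED_KB=167116
AVAIL_KB=1572088
PCT=$(( (USED_KB * 100 + AVAIL_KB / 2) / AVAIL_KB ))  # nearest-percent rounding
echo "${PCT}%"
```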
1.6 Viewing disk size information
# diskinfo /dev/rdsk/c1t2d0
SCSI describe of /dev/rdsk/c1t2d0:
vendor: HP 36.4G
product id: MAS3367NC
type: direct access
size: 35566480 Kbytes
bytes per sector: 512
Viewing the OS version and license
#uname -a
HP-UX scp1 B.11.00 U 9000/800 1124961527 unlimited-user license
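The third field of uname -a is the OS release (B.11.00 above), which is handy to extract in scripts; a sketch with the sample line inlined:

```shell
# Extract the release field from a uname -a line (sample inlined;
# on a live system use: uname -a, or simply uname -r).
UNAME_LINE='HP-UX scp1 B.11.00 U 9000/800 1124961527 unlimited-user license'
RELEASE=$(echo "$UNAME_LINE" | awk '{ print $3 }')
echo "$RELEASE"
```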
1.7 Viewing the number of disks
# ioscan -funC disk
Class I H/W Path Driver S/W State H/W Type Description
========================================================================
disk 0 0/0/1/1.2.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
/dev/dsk/c1t2d0 /dev/rdsk/c1t2d0
disk 1 0/0/2/0.2.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
/dev/dsk/c2t2d0 /dev/rdsk/c2t2d0
disk 2 0/0/2/1.2.0 sdisk CLAIMED DEVICE HP DVD-ROM 305
/dev/dsk/c3t2d0 /dev/rdsk/c3t2d0
disk 3 0/8/0/0.8.0.110.0.0.0 sdisk CLAIMED DEVICE HP A6189B
/dev/dsk/c4t0d0 /dev/rdsk/c4t0d0
disk 4 0/8/0/0.8.0.110.0.0.1 sdisk CLAIMED DEVICE HP A6189B
/dev/dsk/c4t0d1 /dev/rdsk/c4t0d1
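To get an actual count rather than a listing, the class lines can be counted; a sketch with a few sample ioscan lines inlined (on the real system, pipe ioscan -funC disk into grep -c):

```shell
# Count the disk entries in ioscan-style output. Device-file lines
# are indented, so only class lines beginning with "disk" match.
IOSCAN_OUT='disk 0 0/0/1/1.2.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
disk 1 0/0/2/0.2.0 sdisk CLAIMED DEVICE HP 36.4GMAS3367NC
disk 2 0/0/2/1.2.0 sdisk CLAIMED DEVICE HP DVD-ROM 305'
COUNT=$(echo "$IOSCAN_OUT" | grep -c '^disk')
echo "$COUNT"
```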
1.8 Viewing file systems
# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 204800 43440 160120 21% /
/dev/vg00/lvol1 298928 51960 217072 19% /stand
/dev/vg00/lvol8 4710400 431352 4246240 9% /var
/dev/vg00/lvol6 2097152 1097128 992240 53% /usr
/dev/vg00/lvol5 1310720 633376 672696 48% /tmp
/dev/vg00/lvol4 8388608 4207536 4155184 50% /opt
/dev/vg00/lvol7 10485760 4115968 6320080 39% /home
Mount information is recorded in two files: /etc/fstab and /etc/mnttab.
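bdf's %used column is computed from used/(used+avail), rounded to the nearest percent (note it is not based on the kbytes column). Checking the /usr row above:

```shell
# /usr: 1097128 KB used, 992240 KB avail -> bdf reports 53%.
USED=1097128
AVAIL=992240
TOTAL=$(( USED + AVAIL ))
PCT=$(( (USED * 100 + TOTAL / 2) / TOTAL ))  # nearest-percent rounding
echo "${PCT}%"
```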
1.9 Viewing a PV (physical volume)
# pvdisplay /dev/dsk/c4t0d1
--- Physical volumes ---
PV Name /dev/dsk/c4t0d1
VG Name /dev/vg_data
PV Status available
Allocatable yes
VGDA 2
Cur LV 24
PE Size (Mbytes) 4
Total PE 10238
Free PE 3631
Allocated PE 6607
Stale PE 0
IO Timeout (Seconds) default
Autoswitch On
1.10 Viewing a VG (volume group)
# vgdisplay vg_data
--- Volume groups ---
VG Name /dev/vg_data
VG Write Access read/write
VG Status available, shared, server
Max LV 255
Cur LV 24
Open LV 24
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 10239
VGDA 2
PE Size (Mbytes) 4
Total PE 10238
Alloc PE 6607
Free PE 3631
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
1.11 Viewing an LV (logical volume)
# lvdisplay /dev/vg_data/ring_data
--- Logical volumes ---
LV Name /dev/vg_data/ring_data
VG Name /dev/vg_data
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 10000
Current LE 2500
Allocated PE 2500
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
1.12 Viewing the hostname
# hostname
portal1
1.13 Viewing network interfaces
# lanscan
Hardware Station Crd Hdw Net-Interface NM MAC HP-DLPI DLPI
Path Address In# State NamePPA ID Type Support Mjr#
0/0/0/0 0x00306EC32158 0 UP lan0 snap0 1 ETHER Yes 119
0/10/0/0 0x00306E4AA9FA 1 UP lan1 snap1 2 ETHER Yes 119
0/12/0/0 0x00306E4AA9D9 2 UP lan2 snap2 3 ETHER Yes 119
1.14 Viewing a specific NIC
# ifconfig lan0
lan0: flags=843<UP,BROADCAST,RUNNING,MULTICAST>
inet 10.71.111.171 netmask ffffff80 broadcast 10.71.111.255
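ifconfig prints the netmask in hex (ffffff80 above). A small helper (hypothetical, plain POSIX shell) converts it to the dotted-decimal form used elsewhere in this document:

```shell
# Convert a hex netmask as printed by HP-UX ifconfig (e.g. ffffff80)
# into dotted-decimal notation (255.255.255.128).
hex2mask() {
  printf '%d.%d.%d.%d\n' \
    "0x$(echo "$1" | cut -c1-2)" \
    "0x$(echo "$1" | cut -c3-4)" \
    "0x$(echo "$1" | cut -c5-6)" \
    "0x$(echo "$1" | cut -c7-8)"
}
hex2mask ffffff80
```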
1.15 Viewing floating IPs
# netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
lan0:1 1500 10.71.111.128 10.71.111.170 3 0 0 0 0
lan2* 1500 none none 0 0 0 0 0
lan1 1500 192.0.166.0 192.0.166.1 2099845 0 2760 0 0
lan0 1500 10.71.111.128 10.71.111.171 7876597 0 4918169 0 0
lo0 4136 127.0.0.0 127.0.0.1 535887 0 535887 0 0
2 Disk and File System Configuration
2.1 Creating a PV (formatting a disk)
# pvcreate -f /dev/rdsk/cxtydz    # (x = instance, y = target, z = unit)
2.2 Creating a VG
- Create the VG directory and the group special file
#mkdir /dev/vg_data
# mknod /dev/vg_data/group c 64 0x060000
Note: the minor number of the group file must be unique among the VGs on all nodes of the system.
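Conventionally the minor number has the form 0xNN0000, with NN being the VG number, so the 0x060000 passed to mknod above denotes VG number 6; it is this NN byte that must not collide with any other VG's group file on any node. The encoding, checked with shell arithmetic:

```shell
# The group file's minor number 0xNN0000 carries the VG number
# in its high-order byte: 0x060000 -> VG number 6.
MINOR=0x060000
VGNUM=$(( MINOR >> 16 ))
echo "$VGNUM"
```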
# vgcreate /dev/vg_data /dev/dsk/c0t1d0 /dev/dsk/c1t0d0
- Extending the VG
# vgextend /dev/vg_data /dev/dsk/c1t0d1 /dev/dsk/c0t1d1
Keep extending the VG until all the required disks have been added.
2.3 Creating an LV
#lvcreate -l 100 -n Name /dev/vg_data
The size given to lvcreate -l is in units of PEs; each PE is 4 MB by default.
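In other words, the -l argument is a PE count, not a size in MB; with the 4 MB PE size reported by vgdisplay, the resulting size is:

```shell
# "-l 100" with a 4 MB PE size yields a 400 MB logical volume.
PE_COUNT=100    # value passed to lvcreate -l
PE_SIZE_MB=4    # "PE Size (Mbytes)" from vgdisplay
LV_MB=$(( PE_COUNT * PE_SIZE_MB ))
echo "${LV_MB} MB"
```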
2.4 Creating a file system
- Initialize the file system
#newfs -F hfs /dev/vglock/rlv_oracle
- Create the mount point directory
#mkdir /oracle
- Mount the file system
#mount /dev/vglock/lv_oracle /oracle
2.5 Activating a VG
- Exclusive-mode activation
1) Deactivate the VG
#vgchange -a n /dev/vg_data
2) Change the VG's cluster attribute on all nodes
# vgchange -c n /dev/vg_data
3) Activate the VG
#vgchange -a y /dev/vg_data
- Shared-mode activation
1) Deactivate the VG
#vgchange -a n /dev/vg_data
2) Change the VG's cluster attribute on all nodes
# vgchange -S y -c y /dev/vg_data
3) Activate the VG on all nodes
#vgchange -a s /dev/vg_data
2.6 Deactivating a VG
#vgchange -a n /dev/vg_data
2.7 Exporting a VG
- Deactivate the VG:
#vgchange -a n /dev/vg_data
- Create the VG map file:
#vgexport -v -p -s -m vgdatamap /dev/vg_data
- Copy the map file to all nodes in the cluster:
# rcp vgdatamap Node2:/tmp/vgdatamap
2.8 Importing a VG
- On the importing node, create the VG directory containing the group special file:
# mkdir /dev/vg_data
# mknod /dev/vg_data/group c 64 0x060000
Note: the minor number must be the same as on the exporting node.
- Import the VG:
# vgimport -v -s -m /tmp/vgdatamap /dev/vg_data
- Check that the devices have been imported:
# strings /etc/lvmtab
2.9 Extending an LV
- Deactivate the VG on all nodes
#vgchange -a n /dev/vg_data
- Remove the VG on the secondary node
#vgexport -p -v -m /tmp/vgdata.map /dev/vg_data
#vgexport -s -m /tmp/vgdata.map /dev/vg_data
- On the primary node, activate the VG in exclusive mode
# vgchange -c n /dev/vg_data
#vgchange -a y /dev/vg_data
- Extend the LV on the primary node
#lvextend -l 1000 /dev/vg_data/ring_data    (the unit is PEs)
- Export the VG on the primary node
#vgexport -v -p -s -m vgdatamap /dev/vg_data
- Copy the map file to the secondary node(s) in the cluster:
# rcp vgdatamap Node2:/tmp/vgdatamap
- On the secondary node, create the VG directory containing the group special file:
# mkdir /dev/vg_data
# mknod /dev/vg_data/group c 64 0x060000
Note: the minor number must be the same as on the primary node.
- Import the VG on the secondary node:
# vgimport -v -s -m /tmp/vgdatamap /dev/vg_data
2.10 Extending a file system
- Unmount the file system
#umount /oracle
- Extend the LV:
#lvextend -l 50 /dev/vg00/lv_oracle
- Extend the file system
#extendfs /dev/vg00/lv_oracle
(for a vxfs file system, use #extendfs -F vxfs /dev/vg00/lv_oracle instead)
- Mount the file system
#mount /dev/vg00/lv_oracle /oracle
2.11 Extending a system directory's file system
- Boot HP-UX into single-user mode
1) Reboot:
#reboot
2) After the self-test, when "To discontinue, press any key in 10 seconds" appears, press any key; the boot stops and the Main Menu appears.
3) Type "bo"; when the system asks "Interact with IPL (Y/N)?", enter "y".
4) At the ISL> prompt, enter "hpux -is"; the system comes up in single-user (maintenance) mode.
- Extend the file system
#extendfs /dev/vg00/lv_oracle
(for a vxfs file system, use #extendfs -F vxfs /dev/vg00/lv_oracle instead)
- Return to multi-user mode
2.12 Removing an LV
#lvremove /dev/vglock/lv_informix
2.13 Removing a VG
- Deactivate the VG
#vgchange -a n /dev/vglock    (if it cannot be deactivated, force it with: vgchange -c n /dev/vglock)
- Preview the VG removal
#vgexport -p -s -m /tmp/vglock.map /dev/vglock
- Remove the VG
#vgexport -s -m /tmp/vglock.map /dev/vglock
3 Network Configuration
3.1 Configuring IP addresses
- Edit the netconf file
#vi /etc/rc.config.d/netconf
# netconf: configuration values for core networking subsystems
#
# @(#)B.11.11_LR $Revision: 1.6.119.6 $ $Date: 97/09/10 15:56:01 $
#
# HOSTNAME: Name of your system for uname -S and hostname
#
# OPERATING_SYSTEM: Name of operating system returned by uname -s
# ---- DO NOT CHANGE THIS VALUE ----
#
# LOOPBACK_ADDRESS: Loopback address
# ---- DO NOT CHANGE THIS VALUE ----
#
# IMPORTANT: for 9.x-to-10.0 transition, do not put blank lines between
# the next set of statements
# Hostname and operating system
HOSTNAME="portal1"
OPERATING_SYSTEM=HP-UX
LOOPBACK_ADDRESS=127.0.0.1
# Internet configuration parameters. See ifconfig(1m), autopush(1m)
#
# INTERFACE_NAME: Network interface name (see lanscan(1m))
#
# IP_ADDRESS: Hostname (in /etc/hosts) or IP address in decimal-dot
# notation (e.g., 192.1.2.3)
#
# SUBNET_MASK: Subnetwork mask in decimal-dot notation, if different
# from default
#
# BROADCAST_ADDRESS: Broadcast address in decimal-dot notation, if
# different from default
#
# INTERFACE_STATE: Desired interface state at boot time.
# either up or down, default is up.
#
# DHCP_ENABLE Determines whether or not DHCP client functionality
# will be enabled on the network interface (see
# auto_parms(1M), dhcpclient(1M)). DHCP clients get
# their IP address assignments from DHCP servers.
# 1 enables DHCP client functionality; 0 disables it.
#
# For each additional network interfaces, add a set of variable assignments
# like the ones below, changing the index to "[1]", "[2]" et cetera.
#
# IMPORTANT: for 9.x-to-10.0 transition, do not put blank lines between
# the next set of statements
# IP configuration for the first NIC
INTERFACE_NAME[0]=lan0
IP_ADDRESS[0]=10.71.111.171
SUBNET_MASK[0]=255.255.255.128
BROADCAST_ADDRESS[0]=""
INTERFACE_STATE[0]=""
DHCP_ENABLE[0]=0
# Internet routing configuration. See route(1m), routing(7)
#
# ROUTE_DESTINATION: Destination hostname (in /etc/hosts) or host or network
# IP address in decimal-dot notation, preceded by the word
# "host" or "net"; or simply the word "default".
#
# ROUTE_MASK: Subnetwork mask in decimal-dot notation, or C language
# hexadecimal notation. This is an optional field.
# A IP address, subnet mask pair uniquely identifies
# a subnet to be reached. If a subnet mask is not given,
# then the system will assign the longest subnet mask
# of the configured network interfaces to this route.
# If there is no matching subnet mask, then the system
# will assign the default network mask as the route's
# subnet mask.
#
# ROUTE_GATEWAY: Gateway hostname (in /etc/hosts) or IP address in
# decimal-dot notation. If local interface, must use the
# same form as used for IP_ADDRESS above (hostname or
# decimal-dot notation). If loopback interface, i.e.,
# 127.0.0.1, the ROUTE_COUNT must be set to zero.
#
# ROUTE_COUNT: An integer that indicates whether the gateway is a
# remote interface (one) or the local interface (zero)
# or loopback interface (e.g., 127.*).
#
# ROUTE_ARGS: Route command arguments and options. This variable
# may contain a combination of the following arguments:
# "-f", "-n" and "-p pmtu".
#
# For each additional route, add a set of variable assignments like the ones
# below, changing the index to "[1]", "[2]" et cetera.
#
# IMPORTANT: for 9.x-to-10.0 transition, do not put blank lines between
# the next set of statements
# Routing
ROUTE_DESTINATION[0]=default
ROUTE_MASK[0]=""
ROUTE_GATEWAY[0]="10.71.111.129"
ROUTE_COUNT[0]="1"
ROUTE_ARGS[0]=""
# Dynamic routing daemon configuration. See gated(1m)
#
# GATED: Set to 1 to start gated daemon.
# GATED_ARGS: Arguments to the gated daemon.
GATED=0
GATED_ARGS=""
#
# Router Discover Protocol daemon configuration. See rdpd(1m)
#
# RDPD: Set to 1 to start rdpd daemon
#
RDPD=0
#
# Reverse ARP daemon configuration. See rarpd(1m)
#
# RARP: Set to 1 to start rarpd daemon
#
RARP=0
#
# Second NIC
INTERFACE_NAME[1]=lan1
IP_ADDRESS[1]=192.0.166.1
SUBNET_MASK[1]=255.255.255.0
BROADCAST_ADDRESS[1]=""
INTERFACE_STATE[1]=""
DHCP_ENABLE[1]=0
After modifying this file, restart the network for the changes to take effect.
- Alternatively, use the following commands to configure an IP on a single NIC:
#ifconfig lan0 10.71.111.171 netmask 255.255.255.128
#ifconfig lan1 192.0.166.1 netmask 255.255.255.0
3.2 IP-to-hostname mapping
#vi /etc/hosts
# @(#)B.11.11_LRhosts $Revision: 1.9.214.1 $ $Date: 96/10/08 13:20:01 $
#
# The form for each entry is:
# <internet address> <official hostname> <aliases>
#
# For example:
# 192.1.2.34 hpfcrm loghost
#
# See the hosts(4) manual page for more information.
# Note: The entries cannot be preceded by a space.
# The format described in this file is the correct format.
# The original Berkeley manual page contains an error in
# the format description.
#
10.71.111.171 portal1
127.0.0.1 localhost loopback
10.71.111.172 PORTAL2
10.71.111.142 aip1
10.71.111.140 aip2
#
192.0.166.1 portal1ht # heartbeat of portal1
192.0.166.2 portal2ht # heartbeat of portal2
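Entries like these are what scripts key on when they need an address; for example, a lookup of the portal2 heartbeat address (sample lines inlined here for illustration; on the system you would read /etc/hosts itself):

```shell
# Find the IP for a hostname in /etc/hosts-style data.
HOSTS='192.0.166.1 portal1ht
192.0.166.2 portal2ht'
HB_IP=$(echo "$HOSTS" | awk '$2 == "portal2ht" { print $1 }')
echo "$HB_IP"
```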
3.3 Trust relationships
To set up mutual trust for an account across multiple nodes, edit the .rhosts file in that account's home directory on each node:
#vi /root/.rhosts
portal2 root
portal1 root
3.4 Stopping the network
#/sbin/rc2.d/S340net stop
3.5 Starting the network
# /sbin/rc2.d/S340net start
3.6 Bringing up a NIC
#ifconfig lan0 up    # bring up lan0
3.7 Bringing down a NIC
#ifconfig lan0 down    # bring down lan0
4 Cluster Commands
4.1 Viewing the MC/ServiceGuard version
#cmversion
A.11.15.00
4.2 Viewing cluster status
#cmviewcl -v
CLUSTER STATUS
portal_cluster up
NODE STATUS STATE
portal1 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0/0 lan0
PRIMARY up 0/10/0/0 lan1
STANDBY up 0/12/0/0 lan2
NODE STATUS STATE
portal2 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0/0 lan0
STANDBY up 0/12/0/0 lan2
PRIMARY up 0/10/0/0 lan1
PACKAGE STATUS STATE AUTO_RUN NODE
testpkg up running enabled portal2
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 3 0 ORAMONITOR
Subnet up 10.71.111.128
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled portal1
Alternate up enabled portal2 (current)
4.3 Configuring the cluster
1. Create a cluster configuration template:
#cmquerycl -n portal1 -n portal2 -v -C /etc/cmcluster/rac.asc
2. Edit the cluster configuration file (rac.asc):
#vi /etc/cmcluster/rac.asc
# **********************************************************************
# ****** HIGH AVAILABILITY CLUSTER CONFIGURATION FILE *************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the ServiceGuard manual. *********************
# **********************************************************************
# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.
# Cluster name
CLUSTER_NAME portal_cluster
# Cluster Lock Parameters
#
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster. The
# cluster lock may be configured using either a lock disk
# or a quorum server.
#
# You can use either the quorum server or the lock disk as
# a cluster lock but not both in the same cluster.
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock. For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended. For a cluster of more than four nodes, a
# cluster lock is recommended. If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.
# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system
# that is running the quorum server process. The
# QS_POLLING_INTERVAL (microseconds) is the interval at which
# ServiceGuard checks to make sure the quorum server is running.
# The optional QS_TIMEOUT_EXTENSION (microseconds) is used to increase
# the time interval after which the quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the
# ServiceGuard cluster parameters, including NODE_TIMEOUT and
# HEARTBEAT_INTERVAL. If you are experiencing quorum server
# timeouts, you can adjust these parameters, or you can include
# the QS_TIMEOUT_EXTENSION parameter.
#
# For example, to configure a quorum server running on node
# "qshost" with 120 seconds for the QS_POLLING_INTERVAL and to
# add 2 seconds to the system assigned value for the quorum server
# timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000
# Lock disk
FIRST_CLUSTER_LOCK_VG /dev/vglock
# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries(up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which must be all
# STATIONARY_IP. They cannot be HEARTBEAT_IP.
# Node 1 configuration
NODE_NAME portal1
NETWORK_INTERFACE lan0
HEARTBEAT_IP 10.71.111.171
NETWORK_INTERFACE lan2
NETWORK_INTERFACE lan1
STATIONARY_IP 192.0.166.1
FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
# Possible standby Network Interfaces for lan0,lan1: lan2.
# Node 2 configuration
NODE_NAME portal2
NETWORK_INTERFACE lan0
HEARTBEAT_IP 10.71.111.172
NETWORK_INTERFACE lan2
NETWORK_INTERFACE lan1
STATIONARY_IP 192.0.166.2
FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
# Possible standby Network Interfaces for lan0,lan1: lan2.
# Cluster Timing Parameters (microseconds).
# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).
# Heartbeat interval and node timeout settings
HEARTBEAT_INTERVAL 2000000
NODE_TIMEOUT 6000000
# Configuration/Reconfiguration Timing Parameters (microseconds).
# Auto-start timeout
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
# Number of packages in the cluster
MAX_CONFIGURED_PACKAGES 5
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
# VGs managed by LVM; the "vgchange -a e" in package
# application scripts applies to the VGs listed here
VOLUME_GROUP /dev/vg_data
# List of OPS Volume Groups.
# Formerly known as DLM Volume Groups, these volume groups
# will be used by OPS or RAC cluster applications via
# the vgchange -a s command. (Note: the name DLM_VOLUME_GROUP
# is also still supported for compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP /dev/vgdatabase
# OPS_VOLUME_GROUP /dev/vg02
# VGs used by Oracle RAC
OPS_VOLUME_GROUP /dev/vg_data
3. Check the cluster configuration:
# cmcheckconf -v -C /etc/cmcluster/rac.asc
4. Distribute the configuration file to all nodes in the cluster and make it take effect:
# cmapplyconf -v -C /etc/cmcluster/rac.asc
4.4 Starting the cluster
- Start the cluster from any node:
# cmruncl -v
- Or start a single node:
# cmrunnode -v portal1
4.5 Stopping the cluster
- If the 9i RAC instances are up and running, stop them first
- On all nodes, deactivate the volume group:
#vgchange -a n vg_data
- Halt the cluster from any node:
# cmhaltcl -v
4.6 Starting a package
# cmrunpkg [-v] [-n node_name] package_name
4.7 Halting a package
# cmhaltpkg [-v] [-n node_name] package_name
4.8 Modifying a package's switching attributes
# cmmodpkg [-v] {-e | -d} [-n node_name]... package_name...
or:
# cmmodpkg [-v] -R -s service_name package_name
4.9 Adding an IP to the cluster
# cmmodnet [-v] {-a | -r } -i <IP_Address> <subnet_name>
4.10 Starting a service
# cmrunserv [-v] [-r restart_count | -R] service_name service_command
4.11 Halting a service
# cmhaltserv [-v] service_name
5 Appendix
5.1 Script to create the LVs for Oracle RAC
createserverlvs.sh:
pvcreate -f /dev/rdsk/c4t0d1
vgcreate vg_data /dev/dsk/c4t0d1
vgchange -a n vg_data
vgchange -c n vg_data
vgchange -a y vg_data
lvcreate -l 250 -n ora9_system vg_data
lvcreate -l 250 -n ora9_temp vg_data
lvcreate -l 125 -n ora9_rbs1 vg_data
lvcreate -l 125 -n ora9_rbs2 vg_data
lvcreate -l 20 -n ora9_user vg_data
lvcreate -l 20 -n ora9_index vg_data
lvcreate -l 15 -n ora9_tools vg_data
lvcreate -l 25 -n ora9_drsys vg_data
lvcreate -l 15 -n ora9_spfile vg_data
lvcreate -l 25 -n ora9_xdb vg_data
lvcreate -l 75 -n ora9_ctl1 vg_data
lvcreate -l 75 -n ora9_ctl2 vg_data
lvcreate -l 75 -n ora9_ctl3 vg_data
lvcreate -l 75 -n ora9_redo111 vg_data
lvcreate -l 75 -n ora9_redo112 vg_data
lvcreate -l 75 -n ora9_redo121 vg_data
lvcreate -l 75 -n ora9_redo122 vg_data
lvcreate -l 75 -n ora9_redo211 vg_data
lvcreate -l 75 -n ora9_redo212 vg_data
lvcreate -l 75 -n ora9_redo221 vg_data
lvcreate -l 75 -n ora9_redo222 vg_data
lvcreate -l 7500 -n ring_data vg_data
lvcreate -l 7500 -n ring_index vg_data
lvcreate -l 30 -n ora9_srvmconfig vg_data
vgchange -a n vg_data
vgexport -v -p -s -m /tmp/vgdatamap vg_data
rcp /tmp/vgdatamap portal2:/tmp/vgdatamap
chmod 777 /dev/vg_data
chmod 660 /dev/vg_data/r*
chown oracle:dba /dev/vg_data/r*
vgchange -a n vg_data
vgexport -p -s -m /tmp/tempmap vg_data
vgexport -s -m /tmp/tempmap vg_data
mkdir /dev/vg_data
mknod /dev/vg_data/group c 64 0x060000
vgimport -v -s -m /tmp/vgdatamap vg_data
chmod 777 /dev/vg_data
chmod 660 /dev/vg_data/r*
chown oracle:dba /dev/vg_data/r*
createclientlvs.sh:
echo "create logical volumes for Oracle 9i RAC"
vgchange -a n vg_data
vgexport -p -s -m /tmp/tempmap vg_data
vgexport -s -m /tmp/tempmap vg_data
mkdir /dev/vg_data
mknod /dev/vg_data/group c 64 0x060000
Usage: edit createserverlvs.sh to match your environment (node names, disk paths, LV sizes, and so on), then run it on the primary node; run createclientlvs.sh on the secondary node.
5.2 Cluster configuration file cmcluster.ascii