Installing Oracle on Solaris

Solaris 9 + Solstice DiskSuite + Sun Cluster 3.1 + Oracle 10g two-node installation guide
1. Hardware configuration
Sun Fire V890 server                                  2
2 Gb single-port PCI Fibre Channel host bus adapter   2
Quad-port 10/100/1000 auto-negotiating Ethernet card  2
EMC CX500
2. Software configuration
Solaris 9 (9/05)
Sun Cluster 3.1 (9/04)
Oracle 10g (10.2.0.1.0)
Sun recommended patch cluster (9_Recommended, 17/04/06)
3. Operating system installation
    ① System disk (146 GB) partitioning:
0    /                  123006 MB
1    swap                16009 MB
2                       entire disk
3                       (unused)
4                       (unused)
5                       (unused)
6    /globaldevices        516 MB
7                          109 MB
② Host names, IP addresses and netmasks
    sys-1  192.168.22.14   255.255.255.0
    Note: IPMP test addresses 192.168.22.16, 192.168.22.18
          Oracle logical (public) address 192.168.22.17
    sys-2  192.168.22.15   255.255.255.0
    Note: IPMP test addresses 192.168.22.19, 192.168.22.20
          Oracle logical (public) address 192.168.22.17
    Cluster interconnect (heartbeat) interfaces: ce0, ce1
③ Patch installation
    9_Recommended 17/04/06 
④ Kernel parameter changes
  Add the following to /etc/system on both sys-1 and sys-2:
    set shmsys:shminfo_shmmax=4294967295 
set semsys:seminfo_semmap=1024 
set semsys:seminfo_semmni=2048 
set semsys:seminfo_semmns=2048 
set semsys:seminfo_semmsl=2048 
set semsys:seminfo_semmnu=2048 
set semsys:seminfo_semume=200 
set shmsys:shminfo_shmmin=200 
set shmsys:shminfo_shmmni=200 
set shmsys:shminfo_shmseg=200 
set semsys:seminfo_semvmx=32767 
set noexec_user_stack=1 
set noexec_user_stack_log=1 
set ce:ce_reclaim_pending=1 
set ce:ce_taskq_disable=1 
Note: set ce:ce_reclaim_pending=1 works around a bug that affects ce adapters in NAFO groups.
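After the next reboot the new values can be spot-checked; for example (a quick sanity check, not part of the original procedure):
# sysdef | grep -i sem        (semaphore limits)
# sysdef | grep -i shm        (shared memory limits)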
⑤ Installing the driver for the SG-XPCI1FC-QF2 HBA
    Download the HBA driver SAN_4.4.9_install_it.tar.Z from download.sun.com, then run the following on both sys-1 and sys-2:
      # compress -dc SAN_4.4.9_install_it.tar.Z |tar xvf - 
      #./install_it 
Logfile /var/tmp/install_it_Sun_StorEdge_SAN.log : created on Thu Apr 20 11:12:53 CST 2006  


This routine installs the packages and patches that 
make up Sun StorEdge SAN. 

Would you like to continue with the installation?  
 [y,n,?] y 


Verifying system... 


Checking for incompatiable SAN patches 


Begin installation of SAN software 

Installing StorEdge SAN packages - 

         Package SUNWsan        : Installed Successfully. 
         Package SUNWcfpl       : Installed Successfully. 
         Package SUNWcfplx      : Installed Successfully. 
         Package SUNWcfclr      : Installed Successfully. 
         Package SUNWcfcl       : Installed Successfully. 
         Package SUNWcfclx      : Installed Successfully. 
         Package SUNWfchbr      : Installed Successfully. 
         Package SUNWfchba      : Installed Successfully. 
         Package SUNWfchbx      : Installed Successfully. 
         Package SUNWfcsm       : Installed Successfully. 
         Package SUNWfcsmx      : Installed Successfully. 
         Package SUNWmdiu       : Installed Successfully. 
         Package SUNWjfca       : Installed Successfully. 
         Package SUNWjfcax      : Installed Successfully. 
         Package SUNWjfcau      : Installed Successfully. 
         Package SUNWjfcaux     : Installed Successfully. 
         Package SUNWemlxs      : Installed Successfully. 
         Package SUNWemlxsx     : Installed Successfully. 
         Package SUNWemlxu      : Installed Successfully. 
         Package SUNWemlxux     : Installed Successfully. 

StorEdge SAN packages installation completed. 

Begin patch installation 
        Patch 111847-08         : Installed Successfully. 
        Patch 113046-01         : Installed Previously. 
        Patch 113049-01         : Installed Previously. 
        Patch 113039-13         : Installed Successfully. 
        Patch 113040-18         : Installed Successfully. 
        Patch 113041-11         : Installed Successfully. 
        Patch 113042-14         : Installed Successfully. 
        Patch 113043-12         : Installed Successfully. 
        Patch 113044-05         : Installed Successfully. 
        Patch 114476-07         : Installed Successfully. 
        Patch 114477-03         : Installed Successfully. 
        Patch 114478-07         : Installed Successfully. 
        Patch 114878-10         : Installed Successfully. 
        Patch 119914-08         : Installed Successfully. 


Installation of Sun StorEdge SAN completed Successfully 

------------------------------------------- 
------------------------------------------- 
        Please reboot your system. 
------------------------------------------- 
------------------------------------------- 
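After the reboot, the new driver and the LUNs presented by the CX500 can be checked; for example (a quick sanity check, not part of the original procedure):
# cfgadm -al         (list FC controllers and attached devices)
# luxadm probe       (list FC devices visible to the host)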
⑥ Set local-mac-address? to true at the OK prompt
  Run the following on both sys-1 and sys-2:
  OK setenv local-mac-address? True 
  OK reset-all 
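The setting can also be verified from Solaris once the system is back up, for example:
# eeprom local-mac-address?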
⑦ Edit the /etc/hosts file
  On both sys-1 and sys-2, edit it as follows:
  127.0.0.1       localhost 
192.168.22.14   sys-1    loghost  
192.168.22.15   sys-2 
192.168.22.17   oracle


4. Installing SSH
Note: perform the following on both sys-1 and sys-2.
Download the software:
gcc-3.3.2-sol9-sparc-local 
make-3.80-sol9-sparc-local 
ssh-3.2.5.tar 
Install the GCC and make packages:
#pkgadd -d gcc* 
#pkgadd -d make* 
③ Edit /.profile
# cp /etc/skel/local.profile /.profile 
# vi /.profile 
Add:
PATH=/usr/bin:/sbin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/sbin:/usr/openwin/bin:/usr/ucb:/etc:. 
export PATH 
# . /.profile    (source the file so the new PATH takes effect)
Compile and install SSH:
#tar xvf ssh-3.2.5.tar 
#cd ssh* 
#./configure 
#make 
#make install 
    ④ Generate the key pair
# ssh-keygen2 -b 1024    (enter the user name root and the password when prompted)
⑤ Start sshd2
# /usr/local/sbin/sshd2 
⑥ Configure sshd2 to start automatically
    # vi /etc/rc2.d/S99local    and add the line /usr/local/sbin/sshd2
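To confirm that sshd2 is running and reachable from the other node (assuming the ssh2 client from the same build is installed in /usr/local/bin):
# ps -ef | grep sshd2               (confirm the daemon is running)
# /usr/local/bin/ssh2 sys-2         (test a login to the peer node)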
    
5. Configuring IPMP
Note: perform the following on both sys-1 and sys-2.
IPMP is configured on eri0 and ce3.
① Modify /etc/hostname.eri0
# vi /etc/hostname.eri0 and add:
192.168.22.14 netmask + broadcast + group xxml up addif 192.168.22.16 deprecated -failover netmask + broadcast + up 
② Modify /etc/hostname.ce3
# vi /etc/hostname.ce3 and add:
    192.168.22.18 netmask + broadcast + group xxml up deprecated -failover standby up 
③ Add the default gateway
   # vi /etc/defaultrouter
   Add 192.168.22.10
   # ping 192.168.22.10
   192.168.22.10 is alive
Note: a default gateway must be configured and the host must be able to ping it before IPMP can be used.
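Once both interfaces are up and the gateway answers, the group can be checked and a failover exercised; for example (if_mpadm is assumed to be available on this Solaris 9 build):
# ifconfig -a              (eri0 and ce3 should both show group xxml)
# if_mpadm -d eri0         (detach eri0; its data address should move to ce3)
# if_mpadm -r eri0         (reattach eri0)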
6. Installing the Sun Cluster 3.1 software
Note: perform the following on both sys-1 and sys-2.
Install Sun Web Console:
#cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_web_console/2.1 
#./setup 
……….. 

Installation complete. 

Server not started! No management applications registered. 
Install the Sun Cluster 3.1 software:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster 
#./installer 
  
   Click Next
   Click Next
   Click Next
   Click Install Now
   Click Exit to finish the installation

7. Establishing the cluster nodes
Establish the cluster with sys-1 as the first node:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools 
#./scinstall 
  *** Main Menu *** 

    Please select from one of the following (*) options: 

      * 1) Install a cluster or cluster node 
        2) Configure a cluster to be JumpStarted from this install server 
        3) Add support for new data services to this cluster node 
      * 4) Print release information for this cluster node 

      * ?) Help with menu options 
      * q) Quit 

    Option:  1 

  *** Install Menu *** 

    Please select from any one of the following options: 

        1) Install all nodes of a new cluster 
        2) Install just this machine as the first node of a new cluster 
        3) Add this machine as a node in an existing cluster 

        ?) Help with menu options 
        q) Return to the Main Menu 

    Option:  2 

  *** Installing just the First Node of a New Cluster *** 


    This option is used to establish a new cluster using this machine as  
    the first node in that cluster. 

    Once the cluster framework software is installed, you will be asked  
    for the name of the cluster. Then, you will have the opportunity to  
    run sccheck(1M) to test this machine for basic Sun Cluster  
    pre-configuration requirements. 

    After sccheck(1M) passes, you will be asked for the names of the  
    other nodes which will initially be joining that cluster. In  
    addition, you will be asked to provide certain cluster transport  
    configuration information. 

    Press Control-d at any time to return to the Main Menu. 


    Do you want to continue (yes/no) [yes]?   

  >>> Software Patch Installation <<<

    If there are any Solaris or Sun Cluster patches that need to be added  
    as part of this Sun Cluster installation, scinstall can add them for  
    you. All patches that need to be added must first be downloaded into  
    a common patch directory. Patches can be downloaded into the patch  
    directory either as individual patches or as patches grouped together  
    into one or more tar, jar, or zip files. 

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to install patches for you (yes/no) [yes]?   

    What is the name of the patch directory?  /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages 

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to use a patch list file (yes/no) [no]?   

  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of  
    any characters other than whitespace. Each cluster name should be  
    unique within the namespace of your enterprise. 

    What is the name of the cluster you want to establish?  test 

  >>> Check <<<

    This step allows you to run sccheck(1M) to verify that certain basic  
    hardware and software pre-configuration requirements have been met.  
    If sccheck(1M) detects potential problems with configuring this  
    machine as a cluster node, a report of failed checks is prepared and  
    available for display on the screen. Data gathering and report  
    generation can take several minutes, depending on system  
    configuration. 

    Do you want to run sccheck (yes/no) [yes]?  no


  >>> Cluster Nodes <<<

    This Sun Cluster release supports a total of up to 16 nodes. 

    Please list the names of the other nodes planned for the initial  
    cluster configuration. List one node name per line. When finished,  
    type Control-D: 

    Node name:  sys-1 
    Node name:  sys-2 
    Node name (Control-D to finish):  ^D__ 


    This is the complete list of nodes: 

        sys-1 
        sys-2 

    Is it correct (yes/no) [yes]?   

  >>> Authenticating Requests to Add Nodes <<<

    Once the first node establishes itself as a single node cluster,  
    other nodes attempting to add themselves to the cluster configuration  
    must be found on the list of nodes you just provided. You can modify  
    this list using scconf(1M) or other tools once the cluster has been  
    established. 

    By default, nodes are not securely authenticated as they attempt to  
    add themselves to the cluster configuration. This is generally  
    considered adequate, since nodes which are not physically connected  
    to the private cluster interconnect will never be able to actually  
    join the cluster. However, DES authentication is available. If DES  
    authentication is selected, you must configure all necessary  
    encryption keys before any node will be allowed to join the cluster  
    (see keyserv(1M), publickey(4)). 

    Do you need to use DES authentication (yes/no) [no]?   

  >>> Network Address for the Cluster Transport <<<

    The private cluster transport uses a default network address of  
    172.16.0.0. But, if this network address is already in use elsewhere  
    within your enterprise, you may need to select another address from  
    the range of recommended private addresses (see RFC 1597 for details). 

    If you do select another network address, bear in mind that the Sun  
    Cluster software requires that the rightmost two octets always be  
    zero. 

    The default netmask is 255.255.0.0. You can select another netmask,  
    as long as it minimally masks all bits given in the network address. 

    Is it okay to accept the default network address (yes/no) [yes]?   

    Is it okay to accept the default netmask (yes/no) [yes]?   

  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected  
    interconnect. That is, no cluster transport junctions are configured.  
    However, when there are greater than two nodes, this interactive form  
    of scinstall assumes that there will be exactly two cluster transport  
    junctions. 

    Does this two-node cluster use transport junctions (yes/no) [yes]?   

  >>> Cluster Transport Junctions <<<

    All cluster transport adapters in this cluster must be cabled to a  
    transport junction, or "switch". And, each adapter on a given node  
    must be cabled to a different junction. Interactive scinstall  
    requires that you identify two switches for use in the cluster and  
    the two transport adapters on each node to which they are cabled. 

    What is the name of the first junction in the cluster [switch1]?   

    What is the name of the second junction in the cluster [switch2]?   

  >>> Cluster Transport Adapters and Cables <<<

    You must configure at least two cluster transport adapters for each  
    node in the cluster. These are the adapters which attach to the  
    private cluster interconnect. 

    Select the first cluster transport adapter: 

        1) ce0 
        2) ce1 
        3) ce2 
        4) ce3 
        5) ge0 
        6) Other 

    Option:  1 

    Adapter "ce0" is an Ethernet adapter. 

    Searching for any unexpected network traffic on "ce0" ... done 
    Verification completed. No traffic was detected over a 10 second  
    sample period. 

    The "dlpi" transport type will be set for this cluster. 

    Name of the junction to which "ce0" is connected [switch1]?   

    Each adapter is cabled to a particular port on a transport junction.  
    And, each port is assigned a name. You can explicitly assign a name  
    to each port. Or, for Ethernet switches, you can choose to allow  
    scinstall to assign a default name for you. The default port name  
    assignment sets the name to the node number of the node hosting the  
    transport adapter at the other end of the cable. 

    For more information regarding port naming requirements, refer to the  
    scconf_transp_jct family of man pages (e.g.,  
    scconf_transp_jct_dolphinswitch(1M)). 

    Use the default port name for the "ce0" connection (yes/no) [yes]?   

    Select the second cluster transport adapter: 

        1) ce0 
        2) ce1 
        3) ce2 
        4) ce3 
        5) ge0 
        6) Other 

    Option:  2 

    Adapter "ce1" is an Ethernet adapter. 

    Searching for any unexpected network traffic on "ce1" ... done 
    Verification completed. No traffic was detected over a 10 second  
    sample period. 

    Name of the junction to which "ce1" is connected [switch2]?   

    Use the default port name for the "ce1" connection (yes/no) [yes]?   

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on  
    /global/.devices/node@ before it can successfully participate  
    as a cluster member. Since the "nodeID" is not assigned until  
    scinstall is run, scinstall will set this up for you. 

    You must supply the name of either an already-mounted file system or  
    raw disk partition which scinstall can use to create the global  
    devices file system. This file system or partition should be at least  
    512 MB in size. 

    If an already-mounted file system is used, the file system must be  
    empty. If a raw disk partition is used, a new file system will be  
    created for you. 

    The default is to use /globaldevices. 

    Is it okay to use this default (yes/no) [yes]?   

  >>> Automatic Reboot <<<

    Once scinstall has successfully installed and initialized the Sun  
    Cluster software for this machine, it will be necessary to reboot.  
    After the reboot, this machine will be established as the first node  
    in the new cluster. 

    Do you want scinstall to reboot for you (yes/no) [yes]?   

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall: 

      scinstall -ik \  
           -C test \  
           -F \  
           -M patchdir=/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages \  
           -T node=sys-1,node=sys-2,authtype=sys \  
           -A trtype=dlpi,name=ce0 -A trtype=dlpi,name=ce1 \  
           -B type=switch,name=switch1 -B type=switch,name=switch2 \  
           -m endpoint=:ce0,endpoint=switch1 \  
           -m endpoint=:ce1,endpoint=switch2 \  
            

    Are these the options you want to use (yes/no) [yes]?   

    Do you want to continue with the install (yes/no) [yes]?   


Checking device to use for global devices file system ... done 
Installing patches ... failed 

scinstall:  Problems detected during extraction or installation of patches. 


Initializing cluster name to "xxml" ... done 
Initializing authentication options ... done 
Initializing configuration for adapter "ce0" ... done 
Initializing configuration for adapter "ce1" ... done 
Initializing configuration for junction "switch1" ... done 
Initializing configuration for junction "switch2" ... done 
Initializing configuration for cable ... done 
Initializing configuration for cable ... done 


Setting the node ID for "sys-1" ... done (id=1) 

Setting the major number for the "did" driver ... done


"did" driver major number set to 300 

Checking for global devices global file system ... done 
Updating vfstab ... done 

Verifying that NTP is configured ... done 
Installing a default NTP configuration ... done 
Please complete the NTP configuration after scinstall has finished. 

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done 
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done 

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done 
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done 

Verifying that power management is NOT configured ... done 
Unconfiguring power management ... done 
/etc/power.conf has been renamed to /etc/power.conf.042006154016 
Power management is incompatible with the HA goals of the cluster. 
Please do not attempt to re-configure power management. 

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done 

Ensure network routing is disabled ... done 
Network routing has been disabled on this node by creating /etc/notrouter. 
Having a cluster node act as a router is not supported by Sun Cluster. 
Please do not re-enable network routing. 

Log file - /var/cluster/logs/install/scinstall.log.2140 
Add sys-2 to the cluster as the second node:
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools 
#./scinstall 
*** Main Menu *** 

    Please select from one of the following (*) options: 

      * 1) Install a cluster or cluster node 
        2) Configure a cluster to be JumpStarted from this install server 
        3) Add support for new data services to this cluster node 
      * 4) Print release information for this cluster node 

      * ?) Help with menu options 
      * q) Quit 

Option:  1 
*** Install Menu *** 

    Please select from any one of the following options: 

        1) Install all nodes of a new cluster 
        2) Install just this machine as the first node of a new cluster 
        3) Add this machine as a node in an existing cluster 

        ?) Help with menu options 
        q) Return to the Main Menu 

    Option:  3 
*** Adding a Node to an Existing Cluster *** 


    This option is used to add this machine as a node in an already  
    established cluster. If this is an initial cluster install, there may  
    only be a single node which has established itself in the new cluster. 

    Once the cluster framework software is installed, you will be asked  
    to provide both the name of the cluster and the name of one of the  
    nodes already in the cluster. Then, sccheck(1M) is run to test this  
    machine for basic Sun Cluster pre-configuration requirements. 

    After sccheck(1M) passes, you may be asked to provide certain cluster  
    transport configuration information. 

    Press Control-d at any time to return to the Main Menu. 


    Do you want to continue (yes/no) [yes]?   
  >>> Software Patch Installation <<<

    If there are any Solaris or Sun Cluster patches that need to be added  
    as part of this Sun Cluster installation, scinstall can add them for  
    you. All patches that need to be added must first be downloaded into  
    a common patch directory. Patches can be downloaded into the patch  
    directory either as individual patches or as patches grouped together  
    into one or more tar, jar, or zip files. 

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to install patches for you (yes/no) [yes]?   

    What is the name of the patch directory [/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages]?   

    If a patch list file is provided in the patch directory, only those  
    patches listed in the patch list file are installed. Otherwise, all  
    patches found in the directory will be installed. Refer to the  
    patchadd(1M) man page for more information regarding patch list files. 

    Do you want scinstall to use a patch list file (yes/no) [no]? 
  >>> Sponsoring Node <<<

    For any machine to join a cluster, it must identify a node in that  
    cluster willing to "sponsor" its membership in the cluster. When  
    configuring a new cluster, this "sponsor" node is typically the first  
    node used to build the new cluster. However, if the cluster is  
    already established, the "sponsoring" node can be any node in that  
    cluster. 

    Already established clusters can keep a list of hosts which are able  
    to configure themselves as new cluster members. This machine should  
    be in the join list of any cluster which it tries to join. If the  
    list does not include this machine, you may need to add it using  
    scconf(1M) or other tools. 

    And, if the target cluster uses DES to authenticate new machines  
    attempting to configure themselves as new cluster members, the  
    necessary encryption keys must be configured before any attempt to  
    join. 

    What is the name of the sponsoring node [sys-1]? 
>>> Cluster Name <<<

    Each cluster has a name assigned to it. When adding a node to the  
    cluster, you must identify the name of the cluster you are attempting  
    to join. A sanity check is performed to verify that the "sponsoring"  
    node is a member of that cluster. 

    What is the name of the cluster you want to join [test]?   

    Attempting to contact "sys-1" ... done 

    Cluster name "xxml" is correct. 
     
Press Enter to continue: 
>>> Check <<<

    This step allows you to run sccheck(1M) to verify that certain basic  
    hardware and software pre-configuration requirements have been met.  
    If sccheck(1M) detects potential problems with configuring this  
    machine as a cluster node, a report of failed checks is prepared and  
    available for display on the screen. Data gathering and report  
    generation can take several minutes, depending on system  
    configuration. 

    Do you want to run sccheck (yes/no) [yes]?  No 
  >>> Autodiscovery of Cluster Transport <<<

    If you are using Ethernet adapters as your cluster transport  
    adapters, autodiscovery is the best method for configuring the  
    cluster transport. 

    However, it appears that scinstall has already been run at least once  
    before on this machine. You can either attempt to autodiscover or  
    continue with the answers that you gave the last time you ran  
    scinstall. 

    Do you want to use autodiscovery anyway (yes/no) [no]?  yes 
    Probing ..................... 

    The following connection was discovered: 

        sys-1:ce1  switch2  sys-2:ce1 

    Probes were sent out from all transport adapters configured for  
    cluster node "sys-1". But, they were only received on one of the  
    network adapters on this machine ("sys-2"). This may be due to  
    any number of reasons, including improper cabling, an improper  
    configuration for "sys-1", or a switch which was confused by the  
    probes. 

    You can either attempt to correct the problem and try the probes  
    again or try to manually configure the transport. Correcting the  
    problem may involve re-cabling, changing the configuration for  
    "sys-1", or fixing hardware. 

    Do you want to try again (yes/no) [yes]?  no 

  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected  
    interconnect. That is, no cluster transport junctions are configured.  
    However, when there are greater than two nodes, this interactive form  
    of scinstall assumes that there will be exactly two cluster transport  
    junctions. 

    Is this a two-node cluster (yes/no) [yes]?   

    Does this two-node cluster use transport junctions (yes/no) [yes]?   

  >>> Cluster Transport Junctions <<<

    All cluster transport adapters in this cluster must be cabled to a  
    transport junction, or "switch". And, each adapter on a given node  
    must be cabled to a different junction. Interactive scinstall  
    requires that you identify two switches for use in the cluster and  
    the two transport adapters on each node to which they are cabled. 

    What is the name of the first junction in the cluster [switch1]?   

    What is the name of the second junction in the cluster [switch2]?   

  >>> Cluster Transport Adapters and Cables <<<

    You must configure at least two cluster transport adapters for each  
    node in the cluster. These are the adapters which attach to the  
    private cluster interconnect. 

    What is the name of the first cluster transport adapter (help) [ce0]?   

    Adapter "ce0" is an Ethernet adapter. 

    The "dlpi" transport type will be set for this cluster. 

    Name of the junction to which "ce0" is connected [switch1]?   

    Each adapter is cabled to a particular port on a transport junction.  
    And, each port is assigned a name. You can explicitly assign a name  
    to each port. Or, for Ethernet switches, you can choose to allow  
    scinstall to assign a default name for you. The default port name  
    assignment sets the name to the node number of the node hosting the  
    transport adapter at the other end of the cable. 

    For more information regarding port naming requirements, refer to the  
    scconf_transp_jct family of man pages (e.g.,  
    scconf_transp_jct_dolphinswitch(1M)). 

    Use the default port name for the "ce0" connection (yes/no) [yes]?   

    What is the name of the second cluster transport adapter (help) [ce1]?   

    Adapter "ce1" is an Ethernet adapter. 

    Name of the junction to which "ce1" is connected [switch2]?   

    Use the default port name for the "ce1" connection (yes/no) [yes]?   

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on  
    /global/.devices/node@<nodeID> before it can successfully participate  
    as a cluster member. Since the "nodeID" is not assigned until  
    scinstall is run, scinstall will set this up for you. 

    You must supply the name of either an already-mounted file system or  
    raw disk partition which scinstall can use to create the global  
    devices file system. This file system or partition should be at least  
    512 MB in size. 

    If an already-mounted file system is used, the file system must be  
    empty. If a raw disk partition is used, a new file system will be  
    created for you. 

    The default is to use /globaldevices. 

    Is it okay to use this default (yes/no) [yes]?   

  >>> Automatic Reboot <<<

    Once scinstall has successfully installed and initialized the Sun  
    Cluster software for this machine, it will be necessary to reboot.  
    The reboot will cause this machine to join the cluster for the first  
    time. 

    Do you want scinstall to reboot for you (yes/no) [yes]?   

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall: 

      scinstall -ik \  
           -C xxml \  
           -N sjz-xxml-1 \  
           -M patchdir=/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages \  
           -A trtype=dlpi,name=ce0 -A trtype=dlpi,name=ce1 \  
           -m endpoint=:ce0,endpoint=switch1 \  
           -m endpoint=:ce1,endpoint=switch2 \  
            

    Are these the options you want to use (yes/no) [yes]?   

    Do you want to continue with the install (yes/no) [yes]?   


Checking device to use for global devices file system ... done 
Installing patches ... failed 

scinstall:  Problems detected during extraction or installation of patches. 


Adding node "sys-2" to the cluster configuration ... done 
Adding adapter "ce0" to the cluster configuration ... done 
Adding adapter "ce1" to the cluster configuration ... done 
Adding cable to the cluster configuration ... done 
Adding cable to the cluster configuration ... done 

Copying the config from "sys-1" ... done 
Copying the cacao keys from "sys-1" ... done 


Setting the node ID for "sys-2" ... done (id=2) 

Setting the major number for the "did" driver ...  
Obtaining the major number for the "did" driver from "sys-1" ... done 
"did" driver major number set to 300 

Checking for global devices global file system ... done 
Updating vfstab ... done 

Verifying that NTP is configured ... done 
Installing a default NTP configuration ... done 
Please complete the NTP configuration after scinstall has finished. 

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done 
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done 

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done 
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done 

Verifying that power management is NOT configured ... done 
Unconfiguring power management ... done 
/etc/power.conf has been renamed to /etc/power.conf.042206104133 
Power management is incompatible with the HA goals of the cluster. 
Please do not attempt to re-configure power management. 

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...


8. Creating the shared disk set
Note: the following is performed only on sys-1.
View the DID devices:
   # scdidadm -L 
1        sys-1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1      
2        sys-1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d2      
3        sys-1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d3      
4        sys-1:/dev/rdsk/c1t5d0    /dev/did/rdsk/d4      
5        sys-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d5      
6        sys-1:/dev/rdsk/c1t4d0    /dev/did/rdsk/d6      
7        sys-1:/dev/rdsk/c1t3d0    /dev/did/rdsk/d7      
8        sys-1:/dev/rdsk/c3t5006016930226EF3d0 /dev/did/rdsk/d8      
8        sys-1:/dev/rdsk/c3t5006016030226EF3d0 /dev/did/rdsk/d8      
8        sys-1:/dev/rdsk/c2t5006016830226EF3d0 /dev/did/rdsk/d8      
8        sys-1:/dev/rdsk/c2t5006016130226EF3d0 /dev/did/rdsk/d8      
9        sys-1:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c2t5006016130226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c3t5006016930226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c3t5006016030226EF3d53 /dev/did/rdsk/d9      
9        sys-1:/dev/rdsk/c4t60060160478D1900F658B0B052D0DA11d0 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c2t5006016130226EF3d53 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c3t5006016030226EF3d53 /dev/did/rdsk/d9      
9        sys-2:/dev/rdsk/c3t5006016930226EF3d53 /dev/did/rdsk/d9      
10       sys-1:/dev/rdsk/c2t5006016830226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c2t5006016130226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c3t5006016930226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c3t5006016030226EF3d52 /dev/did/rdsk/d10     
10       sys-1:/dev/rdsk/c4t60060160478D19001269AABC52D0DA11d0 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c2t5006016830226EF3d52 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c2t5006016130226EF3d52 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c3t5006016030226EF3d52 /dev/did/rdsk/d10     
10       sys-2:/dev/rdsk/c3t5006016930226EF3d52 /dev/did/rdsk/d10     
11       sys-1:/dev/rdsk/c2t5006016830226EF3d51 /dev/did/rdsk/d11     
11      sys-1:/dev/rdsk/c2t5006016130226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c3t5006016930226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c3t5006016030226EF3d51 /dev/did/rdsk/d11     
11       sys-1:/dev/rdsk/c4t60060160478D1900F0377FC752D0DA11d0 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c2t5006016830226EF3d51 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c2t5006016130226EF3d51 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c3t5006016030226EF3d51 /dev/did/rdsk/d11     
11       sys-2:/dev/rdsk/c3t5006016930226EF3d51 /dev/did/rdsk/d11     
12       sys-1:/dev/rdsk/c2t5006016830226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c2t5006016130226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c3t5006016930226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c3t5006016030226EF3d50 /dev/did/rdsk/d12     
12       sys-1:/dev/rdsk/c4t60060160478D190082D97FD952D0DA11d0 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c2t5006016830226EF3d50 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c2t5006016130226EF3d50 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c3t5006016030226EF3d50 /dev/did/rdsk/d12     
12       sys-2:/dev/rdsk/c3t5006016930226EF3d50 /dev/did/rdsk/d12


Create the metaset disk set:
#metadb -a -f -c 3 c1t0d0s7    (run this on sys-2 as well)
# metadb 
        flags           first blk       block count 
     a        u         16              8192            /dev/dsk/c1t0d0s7 
     a        u         8208            8192            /dev/dsk/c1t0d0s7 
     a        u         16400           8192            /dev/dsk/c1t0d0s7 
# metaset -s oraset -a -h sys-1 sys-2 
# metaset -s oraset -t 
# metaset 

Set name = oraset, Set number = 1 

Host                Owner 
        sys-1         yes 
        sys-2          
Add the shared DID devices to the oraset disk set:
#metaset -s oraset -a /dev/did/rdsk/d9 /dev/did/rdsk/d10 \ 
 /dev/did/rdsk/d11 /dev/did/rdsk/d12 /dev/did/rdsk/d13 \ 
/dev/did/rdsk/d14 /dev/did/rdsk/d15 /dev/did/rdsk/d16 \ 
/dev/did/rdsk/d17 /dev/did/rdsk/d18 /dev/did/rdsk/d19 \ 
/dev/did/rdsk/d20 /dev/did/rdsk/d21 /dev/did/rdsk/d22 \ 
/dev/did/rdsk/d23 /dev/did/rdsk/d24 /dev/did/rdsk/d25 \ 
/dev/did/rdsk/d26 /dev/did/rdsk/d27 /dev/did/rdsk/d28 \ 
/dev/did/rdsk/d29 /dev/did/rdsk/d30 /dev/did/rdsk/d31 \ 
/dev/did/rdsk/d32 /dev/did/rdsk/d33 /dev/did/rdsk/d34 \ 
/dev/did/rdsk/d35 /dev/did/rdsk/d36 /dev/did/rdsk/d37 \ 
/dev/did/rdsk/d38 /dev/did/rdsk/d39 /dev/did/rdsk/d40 \ 
/dev/did/rdsk/d41 /dev/did/rdsk/d42 /dev/did/rdsk/d43 \ 
/dev/did/rdsk/d44 /dev/did/rdsk/d45 /dev/did/rdsk/d46 \ 
/dev/did/rdsk/d47 /dev/did/rdsk/d48 /dev/did/rdsk/d49 \ 
/dev/did/rdsk/d50 /dev/did/rdsk/d51 /dev/did/rdsk/d52 \ 
/dev/did/rdsk/d53 /dev/did/rdsk/d54 /dev/did/rdsk/d55 \ 
/dev/did/rdsk/d56 /dev/did/rdsk/d57 /dev/did/rdsk/d58 \ 
/dev/did/rdsk/d59 /dev/did/rdsk/d60 /dev/did/rdsk/d61 \ 
/dev/did/rdsk/d62 
Create a RAID 0 concatenation from 13 disks:
# metainit oraset/d110 13 1 /dev/did/rdsk/d9s0 1 \ 
/dev/did/rdsk/d10s0 1 /dev/did/rdsk/d11s0 1 \ 
/dev/did/rdsk/d12s0 1 /dev/did/rdsk/d13s0 1 \ 
/dev/did/rdsk/d14s0 1 /dev/did/rdsk/d15s0 1 \ 
/dev/did/rdsk/d16s0 1 /dev/did/rdsk/d17s0 1 \ 
/dev/did/rdsk/d18s0 1 /dev/did/rdsk/d19s0 1 \ 
/dev/did/rdsk/d20s0 1 /dev/did/rdsk/d21s0 
oraset/d110: Concat/Stripe is setup 
Note: difference between concatenation and stripe
     RAID 0 combines the space of several disks into one large logical volume. With concatenation the disks are simply joined end to end, one after another; with striping each disk is divided into stripes and the space is reassembled stripe by stripe (regardless of which disk a stripe lives on) into the logical volume.
     In practice, concatenation only starts using the next physical disk once the previous one is full, while striping can read and write stripes that sit on several physical disks at the same time, so striping gives better I/O performance.
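For comparison only, a stripe over the first three shared DID devices would be created along these lines (oraset/d200 and the 32 KB interlace are illustrative values, not used in this installation):
# metainit oraset/d200 1 3 /dev/did/rdsk/d9s0 /dev/did/rdsk/d10s0 /dev/did/rdsk/d11s0 -i 32k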
Create soft partitions on oraset/d110 to hold the Oracle database files:
    # metainit oraset/d111 -p d110 50m 
oraset/d111: Soft Partition is setup 
# metainit oraset/d112 -p d110 50m 
oraset/d112: Soft Partition is setup 
# metainit oraset/d113 -p d110 50m 
oraset/d113: Soft Partition is setup 
# metainit oraset/d114 -p d110 1024m 
oraset/d114: Soft Partition is setup 
# metainit oraset/d115 -p d110 1024m 
oraset/d115: Soft Partition is setup 
# metainit oraset/d116 -p d110 1024m 
oraset/d116: Soft Partition is setup 
# metainit oraset/d117 -p d110 1024m 
oraset/d117: Soft Partition is setup 
# metainit oraset/d118 -p d110 1024m 
oraset/d118: Soft Partition is setup 
# metainit oraset/d119 -p d110 2048m 
oraset/d119: Soft Partition is setup 
# metainit oraset/d120 -p d110 2048m 
oraset/d120: Soft Partition is setup 
# metainit oraset/d121 -p d110 2048m 
oraset/d121: Soft Partition is setup 
# metainit oraset/d122 -p d110 8192m 
oraset/d122: Soft Partition is setup 
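The resulting layout can be checked at any time with metastat, for example:
# metastat -s oraset -p        (print the concatenation and soft partitions in md.tab format)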
Change the ownership of the newly created raw devices:
    # chown oracle /dev/md/oraset/rdsk/d* 
# chgrp dba /dev/md/oraset/rdsk/d* 
# chmod 600 /dev/md/oraset/rdsk/d* 
# ls -lL /dev/md/oraset/rdsk/d1* 
Note: the steps above must be performed on both hosts.
   
9. Installing the Oracle 10g software
Obtain the Oracle 10g (10.2.0.1.0) media.
Set up the Oracle installation environment.
Note: the following must be performed on both hosts.
● Create the groups and user required for the installation:
#groupadd oinstall 
#groupadd dba 
#useradd -d /export/home/oracle -g oinstall -G dba -m oracle 
#passwd oracle 
● Create the installation directories:
#mkdir /oracle 
#mkdir /oracle/oradata 
#chown -R oracle:oinstall /oracle/oradata 
#chmod 755 /oracle/oradata 
● Set the oracle user's environment variables:
#su - oracle 
#vi .profile 
Add the following:
# This is the default standard profile provided to a user. 
# They are expected to edit it to meet their own needs. 

MAIL=/usr/mail/${LOGNAME:?} 

umask 022 
ORACLE_BASE=/oracle;export ORACLE_BASE 
ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 
export ORACLE_HOME


ORACLE_SID=orcl;export ORACLE_SID 
PATH=$ORACLE_HOME/bin:/usr/bin:/usr/ucb:/etc:/usr/openwin/bin:/usr/ccs/bin 
export PATH 
Install the Oracle software.
Note: performed only on sys-1; do not create a database during the installation.
#cd /cdrom/cdrom0 
#./runInstaller 
Installation steps omitted ...
10. Linking the Oracle database files to raw devices
Note: performed only on sys-1.
# su - oracle 
$ mkdir -p /oracle/oradata/orcl 
$ cd /oracle/oradata/orcl 
$ ln -s /dev/md/oraset/rdsk/d111 control01.ctl 
$ ln -s /dev/md/oraset/rdsk/d112 control02.ctl 
$ ln -s /dev/md/oraset/rdsk/d113 control03.ctl 
$ ln -s /dev/md/oraset/rdsk/d114 sysaux01.dbf 
$ ln -s /dev/md/oraset/rdsk/d115 system01.dbf 
$ ln -s /dev/md/oraset/rdsk/d116 temp01.dbf 
$ ln -s /dev/md/oraset/rdsk/d117 undotbs01.dbf 
$ ln -s /dev/md/oraset/rdsk/d118 users01.dbf 
$ ln -s /dev/md/oraset/rdsk/d119 redo01.log 
$ ln -s /dev/md/oraset/rdsk/d120 redo02.log 
$ ln -s /dev/md/oraset/rdsk/d121 redo03.log 
$ mkdir flash_recovery_area 
$ ln -s /dev/md/oraset/rdsk/d122 flash_recovery_area 
11. Creating the Oracle database
    Note: performed only on sys-1.
Log in to sys-1 graphically and run dbca to create the database:
#cd /oracle/p*/*/*/bin 
#./dbca 
  

Click Next

Click Next

Click Next

Enter Database Name: orcl
      SID:           orcl
Click Next

Click Next

Enter the same password for all accounts (oracle), then click Next

Select Storage Options: Raw Devices, then click Next

Flash Recovery Area: {ORACLE_BASE}/oradata/flash_recovery_area
Flash Recovery Size: 4096 MB
Click Next

Make the selections as shown (screenshot omitted), then click Next

  
File name                          File Directory 
control01.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
control02.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
control03.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
system01.dbf               {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
undotbs01.dbf              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
sysaux01.dbf               {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
users01.dbf                {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
redo01.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
redo02.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
redo03.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/ 
Click Next

Click Finish

Click OK

When database creation completes, click Exit.



12. Creating the listener
Note: performed only on sys-1.
# cd /oracle/p*/*/*/bin 
# netca 
  
Click Next
Click Next
Creation complete


13. Start the database, then tar the Oracle software directory on sys-1, ftp it to sys-2, and extract it there

① Start the database on sys-1 and test it
# su - oracle 
$ sqlplus "/ as sysdba" 
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 14:14:12 2006 

Copyright (c) 1982, 2005, Oracle.  All rights reserved. 

Connected to an idle instance. 

SQL> startup 
ORACLE instance started. 

Total System Global Area 4294967296 bytes 
Fixed Size                  1984144 bytes 
Variable Size             805312880 bytes 
Database Buffers         3472883712 bytes 
Redo Buffers               14786560 bytes 
Database mounted. 
Database opened. 
SQL> shutdown immediate; 
Database closed. 
Database dismounted. 
ORACLE instance shut down. 
SQL>exit 
Create a database connection user and grant privileges:
    SQL> create user oracle identified by oracle; 

User created. 

SQL> grant connect, resource to oracle; 

Grant succeeded. 

SQL> 
Tar the Oracle software directory on sys-1, ftp it to sys-2, and extract it there.
Start the database (now on sys-2) to test it:
# su - oracle 
$ sqlplus "/ as sysdba" 
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 14:14:12 2006 

Copyright (c) 1982, 2005, Oracle.  All rights reserved. 

Connected to an idle instance. 

SQL> startup 
ORACLE instance started. 

Total System Global Area 4294967296 bytes 
Fixed Size                  1984144 bytes 
Variable Size             805312880 bytes 
Database Buffers         3472883712 bytes 
Redo Buffers               14786560 bytes 
Database mounted. 
Database opened. 
SQL> shutdown immediate; 
Database closed. 
Database dismounted. 
ORACLE instance shut down. 
SQL>exit 
Modify /oracle/product/10.2.0.1.0/db_1/network/admin/listener.ora
    and /oracle/product/10.2.0.1.0/db_1/network/admin/tnsnames.ora
Note: do this on both machines.
Replace sys-1 with 192.168.22.17 (the Oracle logical host address).
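For illustration, after the substitution the listener's ADDRESS entry would look roughly like this (the exact contents depend on what netca generated):
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.22.17)(PORT = 1521))
  )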
   
14. Adding the Oracle agent
     Note: add it on both hosts.
     # ./scinstall 


  *** Main Menu *** 

    Please select from one of the following (*) options: 

        1) Install a cluster or cluster node 
        2) Configure a cluster to be JumpStarted from this install server 
      * 3) Add support for new data services to this cluster node 
      * 4) Print release information for this cluster node 

      * ?) Help with menu options 
      * q) Quit 

    Option:   

*** Adding Data Service Software *** 


    This option is used to install data services software. 

    Where is the data services CD [/cdrom/cdrom0]?  /export/soft/sc-agents-3_1_904-sparc 

    Select the data services you want to install: 

           Identifier     Description                                        

        1) pax            Sun Cluster HA for AGFA IMPAX 
        2) tomcat         Sun Cluster HA for Apache Tomcat 
        3) apache         Sun Cluster HA for Apache 
        4) wls            Sun Cluster HA for BEA WebLogic Server 
        5) dhcp           Sun Cluster HA for DHCP 
        6) dns            Sun Cluster HA for DNS 
        7) ebs            Sun Cluster HA for Oracle E-Business Suite 
        8) mqi            Sun Cluster HA for WebSphere MQ Integrator 
        9) mqs            Sun Cluster HA for WebSphere MQ 
       10) mys            Sun Cluster HA for MySQL 

        n) Next > 
        q) Done 

    Option(s):  n 

    Select the data services you want to install: 

           Identifier     Description                                        

       11) sps            Sun Cluster HA for N1 Grid Service Provisioning 
       12) nfs            Sun Cluster HA for NFS 
       13) netbackup      Sun Cluster HA for NetBackup 
       14) 9ias           Sun Cluster HA for Oracle9i Application Server 
       15) oracle         Sun Cluster HA for Oracle 
       16) sapdb          Sun Cluster HA for SAPDB 
       17) sapwebas       Sun Cluster HA for SAP Web Application Server 
       18) sap            Sun Cluster HA for SAP 
       19) livecache      Sun Cluster HA for SAP liveCache 
       20) sge            Sun Cluster HA for Sun Grid Engine 

        p) < Previous 
        n) Next > 
        q) Done 

    Option(s):  15 
     Selected:  15 

    Option(s):  q 


    This is the complete list of data services you selected: 

        oracle 

    Is it correct (yes/no) [yes]?   

    Is it okay to add the software for this data service [yes]   


scinstall -ik -s oracle -d /export/soft/sc-agents-3_1_904-sparc 


** Installing Sun Cluster HA for Oracle ** 
        SUNWscor....done 
        SUNWcscor...done 
        SUNWjscor...done 

     
Press Enter to continue:  s 


15. Creating the quorum devices
      Note: performed only on sys-1.
     # scconf -a -q globaldev=d9 
     # scconf -a -q globaldev=d10 
     # scconf -c -q reset
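The quorum configuration can then be verified with, for example:
# scstat -q        (show quorum votes by node and by device)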


16. Creating the resource group and adding resources
     Note: performed only on sys-1.
Register the resource types:
         # scrgadm -a -t SUNW.oracle_server 
# scrgadm -a -t SUNW.oracle_listener 
Create an empty resource group:
# scrgadm -a -g orareg 
Add the IP resource (create a LogicalHostname resource):
   # scrgadm -a -L -g orareg -l oracle 
Add the storage resource (create an HAStoragePlus resource; GlobalDevicePaths takes a single comma-separated list):
# scrgadm -a -j oradata -g orareg -t SUNW.HAStoragePlus \
-x GlobalDevicePaths=/dev/md/oraset/rdsk/d112,/dev/md/oraset/rdsk/d113,\
/dev/md/oraset/rdsk/d114,/dev/md/oraset/rdsk/d115,\
/dev/md/oraset/rdsk/d116,/dev/md/oraset/rdsk/d117,\
/dev/md/oraset/rdsk/d118,/dev/md/oraset/rdsk/d119,\
/dev/md/oraset/rdsk/d120,/dev/md/oraset/rdsk/d121,\
/dev/md/oraset/rdsk/d122
Add the application resource (create an oracle_server resource):
# scrgadm -a -j oraser -g orareg \ 
-t SUNW.oracle_server \ 
-x ORACLE_SID=orcl \ 
-x ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 \ 
-x Alert_log_file=/oracle/admin/orcl/bdump/alert_orcl.log \ 
-x Parameter_file=/oracle/admin/orcl/pfile/init.ora \ 
-x Connect_string=oracle/oracle 

Add the listener resource (create an oracle_listener resource):
# scrgadm -a -j oralistener -g orareg -t SUNW.oracle_listener \ 
-x ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 \ 
-x LISTENER_NAME=LISTENER 
Bring the resource group online:
# scswitch -Z -g orareg 
Check the cluster status:
    # scstat 
------------------------------------------------------------------ 

-- Cluster Nodes -- 

                    Node name           Status 
                    ---------           ------ 
  Cluster node:     sys-1          Online 
  Cluster node:     sys-2          Online 

------------------------------------------------------------------ 

-- Cluster Transport Paths -- 

                    Endpoint            Endpoint            Status 
                    --------            --------            ------ 
  Transport path:   sys-1:ce1      sys-2:ce1      Path online 
  Transport path:   sys-1:ce0      sys-2:ce0      Path online 

------------------------------------------------------------------ 

-- Quorum Summary -- 

  Quorum votes possible:      4 
  Quorum votes needed:        3 
  Quorum votes present:       4 


-- Quorum Votes by Node -- 

                    Node Name           Present Possible Status 
                    ---------           ------- -------- ------ 
  Node votes:       sys-1          1        1       Online 
  Node votes:       sys-2          1        1       Online 


-- Quorum Votes by Device -- 

                    Device Name         Present Possible Status 
                    -----------         ------- -------- ------ 
  Device votes:     /dev/did/rdsk/d9s2  1        1       Online 
  Device votes:     /dev/did/rdsk/d10s2 1        1       Online 

------------------------------------------------------------------ 

-- Device Group Servers -- 

                         Device Group        Primary             Secondary 
                         ------------        -------             --------- 
  Device group servers:  oraset              sys-1          sys-2 

-- Device Group Status -- 

                              Device Group        Status               
                              ------------        ------               
  Device group status:        oraset              Online 


-- Multi-owner Device Groups -- 

                              Device Group        Online Status 
                              ------------        ------------- 

------------------------------------------------------------------ 

-- Resource Groups and Resources -- 

            Group Name          Resources 
            ----------          --------- 
 Resources: orareg              oracle oradata oraser oralistener 


-- Resource Groups -- 

            Group Name          Node Name           State 
            ----------          ---------           ----- 
     Group: orareg              sys-1          Online 
     Group: orareg              sys-2          Offline 


-- Resources -- 

            Resource Name       Node Name           State     Status Message 
            -------------       ---------           -----     -------------- 
  Resource: oracle              sys-1          Online    Online - LogicalHostname online. 
  Resource: oracle              sys-2          Offline   Offline - LogicalHostname offline. 

  Resource: oradata             sys-1          Online    Online 
  Resource: oradata             sys-2          Offline   Offline 

  Resource: oraser              sys-1          Online    Online 
  Resource: oraser              sys-2          Offline   Offline 

  Resource: oralistener         sys-1          Online    Online 
  Resource: oralistener         sys-2          Offline   Offline 

------------------------------------------------------------------ 

-- IPMP Groups -- 

              Node Name           Group   Status         Adapter   Status 
              ---------           -----   ------         -------   ------ 
  IPMP Group: sys-1          xxml    Online         eri0      Online 
  IPMP Group: sys-1          xxml    Online         ce3       Standby 

  IPMP Group: sys-2          xxml    Online         eri0      Online 
  IPMP Group: sys-2          xxml    Online         ce3       Standby
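As a final check, the resource group can be switched between the nodes manually to confirm that Oracle fails over cleanly; for example:
# scswitch -z -g orareg -h sys-2      (move the orareg group to sys-2)
# scstat -g                           (orareg should now be online on sys-2)
# scswitch -z -g orareg -h sys-1      (move it back to sys-1)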

Source: ITPUB blog, http://blog.itpub.net/22531473/viewspace-671481/