ADD DISK TO ASM (10g RAC, HP-UX)



Time: 2011-12-31

During the New Year's Day holiday the production system had a planned maintenance outage to add disks to ASM for a three-node RAC. I have to say HP-UX really is not pleasant to work with:

a job that takes half an hour on AIX took a whole morning on HP-UX. What a hassle!

Environment:

HP-UX 11.31 + Oracle 10.2.0.4, 3-node RAC (storage: HP EVA 6100 FC disks)

 

1. Carve two 300 GB RAID5 FC EVA vdisks and present them to all nodes of the sfc12rc RAC.

 

----------------------------------------------------------------------

2. List the OS devices and find the new disks on all nodes:

   ioscan -N -fCdisk

   ioscan -N -m lun

   ioscan -fnCdisk

   smh to check disk properties: SMH -> Disks and File Systems -> Disks -> Details

 

VD name             UUID                                      LUN   sfc12rc1   sfc12rc2   sfc12rc3
----------------------------------------------------------------------------------------------------
VD_12_ORA_DATA_13   6001-4380-02a5-7554-0001-0000-0096-0000    18   disk98     disk97     disk99
VD_12_ORA_DATA_14   6001-4380-02a5-7554-0001-0000-0099-0000    19   disk103    disk103    disk104
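
Because the persistent DSF names differ from node to node (the same vdisk is disk98 on sfc12rc1 and disk97 on sfc12rc2), it is worth confirming that each per-node device really points at the intended EVA vdisk before touching it. A minimal sketch, assuming the standard 11iv3 scsimgr utility; compare the reported WWID with the UUID column above (the device names shown are the sfc12rc1 ones):

   # print the array WWID behind each candidate disk DSF (sketch)
   for d in disk98 disk103; do        # use disk97/disk103 on sfc12rc2, disk99/disk104 on sfc12rc3
       echo "== /dev/rdisk/$d =="
       scsimgr get_attr -D /dev/rdisk/$d -a wwid | grep -i current
   done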

 

 

----------------------------------------------------------------------

3. Stop the Oracle cluster and the OS cluster

  

   Use sqlplus to shut down the database on each node, sfc12rc1 through sfc12rc3

   --sqlplus / as sysdba, then shutdown immediate, on each node (see the sketch after this list)

   stop all oracle cluster service:

   crs_stop -all

   crsctl stop crs

 

   stop OS cluster:

   cmviewcl

   cmhaltcl -f

   cmviewcl
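
A minimal sketch of the per-node shutdown, run as the oracle user on each of sfc12rc1, sfc12rc2 and sfc12rc3 before the clusterware is touched (it assumes the local instance's environment is already set in the oracle user's profile):

   # shut down the local RAC instance cleanly (sketch)
   sqlplus / as sysdba <<EOF
shutdown immediate
exit
EOF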

 

 

sfc12rc1:/# crsctl check crs

CSS appears healthy
CRS appears healthy
EVM appears healthy
sfc12rc1:/# crsctl stop crs       --stop cluster
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
sfc12rc1:/#
sfc12rc1:/#
sfc12rc1:/# cmviewcl   --view information about a high availability cluster

 

CLUSTER        STATUS      

clu_RACSFC12   up          

 

  NODE          STATUS       STATE       

  sfc12rc1      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP1    up            running       enabled      sfc12rc1   

    pkgSFC12DB1    up            running       disabled     sfc12rc1   

 

  NODE          STATUS       STATE       

  sfc12rc2      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP2    up            running       enabled      sfc12rc2   

    pkgSFC12DB2    up            running       disabled     sfc12rc2   

 

  NODE          STATUS       STATE       

  sfc12rc3      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP3    up           running       enabled      sfc12rc3   

    pkgSFC12DB3    up            running       disabled     sfc12rc3   

 

sfc12rc1:/# cmhaltcl             --halt the cluster; without -f it refuses while packages are still running
Package pkgSFC12IP1 is still running on sfc12rc1.
Package pkgSFC12IP2 is still running on sfc12rc2.
Package pkgSFC12IP3 is still running on sfc12rc3.
Package pkgSFC12DB1 is still running on sfc12rc1.
Package pkgSFC12DB2 is still running on sfc12rc2.
Package pkgSFC12DB3 is still running on sfc12rc3.
Use the -f option to forcefully halt the cluster/node including halting packages.
sfc12rc1:/# cmhaltcl -f
Disabling all packages from starting on nodes to be halted.
Warning:  Do not modify or enable packages until the halt operation is completed.
Disabling automatic failover for failover packages to be halted.
Halting package pkgSFC12IP1
Successfully halted package pkgSFC12IP1
Halting package pkgSFC12DB1
Successfully halted package pkgSFC12DB1
Halting package pkgSFC12IP2
Successfully halted package pkgSFC12IP2
Halting package pkgSFC12DB2
Successfully halted package pkgSFC12DB2
Halting package pkgSFC12IP3
Successfully halted package pkgSFC12IP3
Halting package pkgSFC12DB3
Successfully halted package pkgSFC12DB3
This operation may take some time.
Waiting for nodes to halt ... done
Successfully halted all nodes specified.
Halt operation complete.
sfc12rc1:/# cmviewcl      --check that the OS cluster is down

 

CLUSTER        STATUS      

clu_RACSFC12   down        

 

  NODE          STATUS       STATE       

  sfc12rc1      down         unknown     

  sfc12rc2      down         unknown     

  sfc12rc3      down         unknown     

   

UNOWNED_PACKAGES

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP1    down          halted        enabled      unowned    

    pkgSFC12IP2    down          halted        enabled      unowned    

    pkgSFC12IP3    down          halted        enabled      unowned    

    pkgSFC12DB1    down          halted        disabled     unowned    

    pkgSFC12DB2    down          halted        disabled     unowned    

    pkgSFC12DB3    down          halted        disabled     unowned    

sfc12rc1:/#

 

----------------------------------------------------------------------

4. Create the VG and LVs on sfc12rc1, then export the VG map file:

   sfc12rc1:

   pvcreate /dev/rdisk/disk98

   pvcreate /dev/rdisk/disk103

   mkdir /dev/vg_ora_data04

   mknod /dev/vg_ora_data04/group c 64 0x070000

 

   vgcreate -l 10 -s 32 vg_ora_data04 /dev/disk/disk98

   vgextend vg_ora_data04 /dev/disk/disk103

   vgdisplay vg_ora_data04

   vgdisplay -v vg_ora_data04

 

   lvcreate -l 9597 -n lvdata05 vg_ora_data04    --see the extent math sketched after this list

   lvcreate -l 9597 -n lvdata06 vg_ora_data04

 

   mkdir -p /tmp/20111231  --from sfc12rc1 to sfc12rc3

   vgexport -p -v -s -m /tmp/20111231/vg_ora_data04.map vg_ora_data04

   rcp /tmp/20111231/vg_ora_data04.map sfc12rc2:/tmp/20111231

   rcp /tmp/20111231/vg_ora_data04.map sfc12rc3:/tmp/20111231
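
For reference, the extent arithmetic behind "-s 32" (32 MB physical extents) and "-l 9597": each ~300 GB EVA vdisk yields 9599 extents, and each logical volume is sized two extents short of that so a couple of PEs stay free in the VG. A quick sanity check of the numbers (a sketch, plain POSIX shell, nothing HP-UX specific):

   # 32 MB extents: 9599 PEs per PV, 9597 PEs per LV
   PE_MB=32
   echo "PV capacity : $((9599 * PE_MB)) MB"     # 307168 MB
   echo "LV size     : $((9597 * PE_MB)) MB"     # 307104 MB, matches lvdisplay "LV Size (Mbytes) 307104"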

 

sfc12rc1:/# pvcreate /dev/rdisk/disk98      --create physical volume
Physical volume "/dev/rdisk/disk98" has been successfully created.
sfc12rc1:/# pvcreate /dev/rdisk/disk103     --create physical volume
Physical volume "/dev/rdisk/disk103" has been successfully created.
sfc12rc1:/# ls -lrt /dev/vg_ora_data04
/dev/vg_ora_data04 not found
sfc12rc1:/# mkdir /dev/vg_ora_data04
sfc12rc1:/# mknod /dev/vg_ora_data04/group c 64 0x070000
--the trailing (minor) number just has to differ from that of every other VG
sfc12rc1:/# ls -lrt /dev/vg_ora_data04
total 0
crw-r--r--   1 root      sys         64 0x070000 Dec 31 08:53 group

 

sfc12rc1:/# vgdisplay vg_ora_data04  --display information about LVM volume groups
--- Volume groups ---
VG Name                     /dev/vg_ora_data04
VG Write Access             read/write
VG Status                   available
Max LV                      10
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               9599
VGDA                        4
PE Size (Mbytes)            32
Total PE                    19198
Alloc PE                    0
Free PE                     19198
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 4914688m
VG Max Extents              153584

 

sfc12rc1:/# vgdisplay -v vg_ora_data04
--- Volume groups ---
VG Name                     /dev/vg_ora_data04
VG Write Access             read/write
VG Status                   available
Max LV                      10
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               9599
VGDA                        4
PE Size (Mbytes)            32
Total PE                    19198
Alloc PE                    0
Free PE                     19198
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 4914688m
VG Max Extents              153584

   --- Physical volumes ---
   PV Name                     /dev/disk/disk98
   PV Status                   available
   Total PE                    9599
   Free PE                     9599
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk103
   PV Status                   available
   Total PE                    9599
   Free PE                     9599
   Autoswitch                  On
   Proactive Polling           On

 

 

sfc12rc1:/#

sfc12rc1:/# lvcreate -l 9597 -n lvdata05 vg_ora_data04
--create logical volume; Max PE per PV is 9599, but leave 2 PEs free for the system
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully created with
character device "/dev/vg_ora_data04/rlvdata05".
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully extended.
Volume Group configuration for /dev/vg_ora_data04 has been saved in /etc/lvmconf/vg_ora_data04.conf
sfc12rc1:/# lvcreate -l 9597 -n lvdata06 vg_ora_data04
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully created with
character device "/dev/vg_ora_data04/rlvdata06".
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully extended.
Volume Group configuration for /dev/vg_ora_data04 has been saved in /etc/lvmconf/vg_ora_data04.conf
sfc12rc1:/#

sfc12rc1:/#

sfc12rc1:/# lvdisplay /dev/vg_ora_data04/lvdata05  --show LV information
--- Logical volumes ---
LV Name                     /dev/vg_ora_data04/lvdata05
VG Name                     /dev/vg_ora_data04
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            307104
Current LE                  9597
Allocated PE                9597
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

 

sfc12rc1:/#

sfc12rc1:/# mkdir -p /tmp/20111231
sfc12rc1:/# vgexport -p -v -s -m /tmp/20111231/vg_ora_data04.map vg_ora_data04
--write the VG configuration out to a map file (preview mode, -p)
Beginning the export process on Volume Group "vg_ora_data04".
vgexport: Volume group "vg_ora_data04" is still active.
/dev/disk/disk98
/dev/disk/disk103
vgexport: Preview of vgexport on volume group "vg_ora_data04" succeeded.
sfc12rc1:/#
sfc12rc1:/# rcp /tmp/20111231/vg_ora_data04.map sfc12rc2:/tmp/20111231
sfc12rc1:/# rcp /tmp/20111231/vg_ora_data04.map sfc12rc3:/tmp/20111231
--copy the VG map file to the other two nodes
sfc12rc1:/#

 

----------------------------------------------------------------------

5. Import the VG map file (generated on sfc12rc1) on sfc12rc2:

   sfc12rc2:

   mkdir /dev/vg_ora_data04

   mknod /dev/vg_ora_data04/group c 64 0x070000

   vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk97 \

   /dev/disk/disk103

 

sfc12rc2:/# mkdir /dev/vg_ora_data04
sfc12rc2:/# mknod /dev/vg_ora_data04/group c 64 0x070000
sfc12rc2:/# ls -lrt /dev/vg_ora_data04
total 0
crw-r--r--   1 root      sys         64 0x070000 Dec 31 09:03 group
sfc12rc2:/# vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk97 \
>   /dev/disk/disk103
--import the VG configuration on this node
Beginning the import process on Volume Group "vg_ora_data04".
vgimport: Warning:  Volume Group belongs to different CPU ID.
Cannot determine if Volume Group is in use on another system. Continuing.
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully created
with lv number 1.
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully created
with lv number 2.
vgimport: Volume group "/dev/vg_ora_data04" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
sfc12rc2:/# vgdisplay vg_ora_data04
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "vg_ora_data04".
sfc12rc2:/# vgdisplay -v vg_ora_data04
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "vg_ora_data04".
sfc12rc2:/# lvdisplay /dev/vg_ora_data04/lvdata05
lvdisplay: Couldn't query logical volume "/dev/vg_ora_data04/lvdata05":
Volume group not activated.

lvdisplay: Cannot display logical volume "/dev/vg_ora_data04/lvdata05".
sfc12rc2:/#
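
As the vgimport output reminds us, the newly imported VG has no LVM configuration backup on this node yet. An optional follow-up (a sketch; the Serviceguard package normally activates the VG itself, so deactivate it again straight away):

   vgchange -a y vg_ora_data04     # activate temporarily on this node
   vgcfgbackup vg_ora_data04       # writes /etc/lvmconf/vg_ora_data04.conf locally
   vgchange -a n vg_ora_data04     # deactivate again before the cluster is restarted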

 

----------------------------------------------------------------------

6. Import the VG map file (generated on sfc12rc1) on sfc12rc3:

   sfc12rc3:

   mkdir /dev/vg_ora_data04

   mknod /dev/vg_ora_data04/group c 64 0x070000

   vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk99 \

   /dev/disk/disk104

 

 

sfc12rc3:/#
sfc12rc3:/# mkdir /dev/vg_ora_data04
sfc12rc3:/# mknod /dev/vg_ora_data04/group c 64 0x070000
sfc12rc3:/# vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk99 \
>   /dev/disk/disk104
--import the VG configuration on this node
Beginning the import process on Volume Group "vg_ora_data04".
vgimport: Warning:  Volume Group belongs to different CPU ID.
Cannot determine if Volume Group is in use on another system. Continuing.
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully created
with lv number 1.
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully created
with lv number 2.
vgimport: Volume group "/dev/vg_ora_data04" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
sfc12rc3:/#

 

----------------------------------------------------------------------

7. Modify the cluster configuration file and apply it on sfc12rc1:

   sfc12rc1:

   cd /etc/cmcluster/

   Back up /etc/cmcluster/cluster_sfc12.ascii, then edit it.

   Below the existing volume group entries, add the new one (the resulting section is sketched after this list):

   OPS_VOLUME_GROUP               /dev/vg_ora_data04

 

   cmcheckconf -v -C /etc/cmcluster/cluster_sfc12.ascii

 

   cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii

 

   rcp -p cluster_sfc12.ascii sfc12rc2:/etc/cmcluster/

   rcp -p cluster_sfc12.ascii sfc12rc3:/etc/cmcluster/

 

   cd pkgSFC12DB1

   Modify the pkgSFC12DB1 package configuration:

   back up pkgSFC12DB1.cntl and add

   VG[5]=vg_ora_data04

 

   cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \

   /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf
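
For orientation, the relevant part of cluster_sfc12.ascii after the edit might look like this (a sketch only: the pre-existing OPS_VOLUME_GROUP entries are assumed to mirror the VG[] list in the package control script; the last line is all this change actually adds):

   OPS_VOLUME_GROUP               /dev/vg_ora_vote
   OPS_VOLUME_GROUP               /dev/vg_ora_data01
   OPS_VOLUME_GROUP               /dev/vg_ora_arch01
   OPS_VOLUME_GROUP               /dev/vg_ora_data02
   OPS_VOLUME_GROUP               /dev/vg_ora_data03
   # new entry added in this change:
   OPS_VOLUME_GROUP               /dev/vg_ora_data04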

 

 

sfc12rc1:/# cd /etc/cmcluster/

sfc12rc1:/etc/cmcluster# ls -lrt
total 208
-r--------   1 bin       bin            118 Mar 16  2007 cmknowncmds
drwxr-xr-x   2 bin       bin           8192 Oct 22  2008 cfs
dr-xr-xr-x   2 bin       bin             96 Oct 22  2008 examples
dr-xr-xr-x   4 root      root            96 Oct 22  2008 modules
dr-xr-xr-x   5 bin       bin             96 Jan  7  2009 scripts
dr-xr-xr-x   2 bin       bin           8192 Jan  7  2009 lib
-rw-r--r--   1 root      sys             11 Jan 15  2009 mapfile
----------   1 root      root             0 Jan 15  2009 config.lck
drwxr-xr-x   2 root      sys             96 Jan 15  2009 pkgSFC12IP1
drwxr-xr-x   2 root      sys             96 Feb  9  2009 pkgSFC12IP3
drwxr-xr-x   2 root      sys             96 Feb 20  2009 pkgSFC12IP2
-rw-r--r--   1 root      sys          10458 Jun 29  2009 cluster_sfc12.ascii20100921
drwx------   2 root      sys           8192 Sep 21  2010 pkgSFC12DB1
-rw-r--r--   1 root      sys          10495 Oct  2  2010 cluster_sfc12.ascii
-rw-------   1 root      root         30916 Dec 31 08:45 cmclconfig
-rw-------   1 root      root             0 Dec 31 08:45 cmclconfig.tmp
sfc12rc1:/etc/cmcluster# cp cluster_sfc12.ascii cluster_sfc12.ascii20111231
--back up the system file before modifying it
sfc12rc1:/etc/cmcluster# vi /etc/cmcluster/cluster_sfc12.ascii
--append the new volume group entry at the end of the file: OPS_VOLUME_GROUP /dev/vg_ora_data04
sfc12rc1:/etc/cmcluster#

sfc12rc1:/etc/cmcluster# cmcheckconf -v -C /etc/cmcluster/cluster_sfc12.ascii
--verify that the modified file contains no errors
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration.
sfc12rc1:/etc/cmcluster#

sfc12rc1:/etc/cmcluster# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii
--apply the modified configuration
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12

Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc1:/etc/cmcluster#
sfc12rc1:/etc/cmcluster# rcp -p cluster_sfc12.ascii sfc12rc2:/etc/cmcluster/
sfc12rc1:/etc/cmcluster# rcp -p cluster_sfc12.ascii sfc12rc3:/etc/cmcluster/
--copy the updated cluster_sfc12.ascii to the other two nodes
sfc12rc1:/etc/cmcluster#

sfc12rc1:/etc/cmcluster# cd pkgSFC12DB1

sfc12rc1:/etc/cmcluster/pkgSFC12DB1# ls -lrt
total 464
-rwx------   1 root      sys          26764 Jan 15  2009 pkgSFC12DB1.conf
-rwx------   1 root      sys          64281 Jun 29  2009 pkgSFC12DB1.cntl20100921
-rwx------   1 root      sys          64301 Oct  2  2010 pkgSFC12DB1.cntl
-rw-r--r--   1 root      root         67729 Dec 31 08:47 pkgSFC12DB1.cntl.log
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# cp pkgSFC12DB1.cntl pkgSFC12DB1.cntl20111231
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# vi pkgSFC12DB1.cntl
#VG[0]=""

VG[0]=vg_ora_vote
VG[1]=vg_ora_data01
VG[2]=vg_ora_arch01
VG[3]=vg_ora_data02
VG[4]=vg_ora_data03
VG[5]=vg_ora_data04    --newly added
sfc12rc1:/etc/cmcluster/pkgSFC12DB1#
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
>   /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Parsing package file: /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf.
/etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf: A legacy package is being used.
Package pkgSFC12DB1 already exists. It will be modified.
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modifying the package configuration for package pkgSFC12DB1.

Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc1:/etc/cmcluster/pkgSFC12DB1#

 

---------------------------------------------------------------------

8. Modify the package configuration file and apply it on sfc12rc2:

   sfc12rc2:

   cd /etc/cmcluster/pkgSFC12DB2

   Modify the pkgSFC12DB2 configuration:

   back up pkgSFC12DB2.cntl and add

   VG[5]=vg_ora_data04

 

   cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \

   /etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf

 

sfc12rc2:/# cd /etc/cmcluster/pkgSFC12DB2

sfc12rc2:/etc/cmcluster/pkgSFC12DB2# ls -lrt
total 448
-rwx------   1 root      sys          26764 Jan 15  2009 pkgSFC12DB2.conf
-rwx------   1 root      sys          64281 Jun 29  2009 pkgSFC12DB2.cntl20100921
-rwx------   1 root      sys          64301 Oct  2  2010 pkgSFC12DB2.cntl
-rw-r--r--   1 root      root         65051 Dec 31 08:47 pkgSFC12DB2.cntl.log
sfc12rc2:/etc/cmcluster/pkgSFC12DB2# vi pkgSFC12DB2.cntl
#VG[0]=""

VG[0]=vg_ora_vote
VG[1]=vg_ora_data01
VG[2]=vg_ora_arch01
VG[3]=vg_ora_data02
VG[4]=vg_ora_data03
VG[5]=vg_ora_data04    --newly added
sfc12rc2:/etc/cmcluster/pkgSFC12DB2#
sfc12rc2:/etc/cmcluster/pkgSFC12DB2# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
>   /etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Parsing package file: /etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf.
/etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf: A legacy package is being used.
Package pkgSFC12DB2 already exists. It will be modified.
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modifying the package configuration for package pkgSFC12DB2.

Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc2:/etc/cmcluster/pkgSFC12DB2#

 

---------------------------------------------------------------------

9. Modify the package configuration file and apply it on sfc12rc3:

   sfc12rc3:

   cd /etc/cmcluster/pkgSFC12DB3

   Modify the pkgSFC12DB3 configuration:

   back up pkgSFC12DB3.cntl and add

   VG[5]=vg_ora_data04

 

   cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \

   /etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf

 

 

sfc12rc3:/# cd /etc/cmcluster/pkgSFC12DB3

sfc12rc3:/etc/cmcluster/pkgSFC12DB3# ls -lrt
total 448
-rwx------   1 root      sys          26764 Jan 15  2009 pkgSFC12DB3.conf
-rwx------   1 root      sys          64281 Jun 29  2009 pkgSFC12DB3.cntl20100921
-rwx------   1 root      sys          64301 Oct  2  2010 pkgSFC12DB3.cntl
-rw-r--r--   1 root      root         62722 Dec 31 08:47 pkgSFC12DB3.cntl.log
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# cp pkgSFC12DB3.cntl pkgSFC12DB3.cntl20111231
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# vi pkgSFC12DB3.cntl
#VG[0]=""

VG[0]=vg_ora_vote
VG[1]=vg_ora_data01
VG[2]=vg_ora_arch01
VG[3]=vg_ora_data02
VG[4]=vg_ora_data03
VG[5]=vg_ora_data04    --newly added
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
>    /etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Parsing package file: /etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf.
/etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf: A legacy package is being used.
Package pkgSFC12DB3 already exists. It will be modified.
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modifying the package configuration for package pkgSFC12DB3.

Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc3:/etc/cmcluster/pkgSFC12DB3#

 

--------------------------------------------------------------------

10. Start the OS cluster from sfc12rc1:

    vgchange -a n vg_ora_data04     --deactivate the new VG first (see the note after this list)

    cmruncl

    cmviewcl

    cmrunpkg -n sfc12rc1 pkgSFC12DB1

    cmrunpkg -n sfc12rc2 pkgSFC12DB2

    cmrunpkg -n sfc12rc3 pkgSFC12DB3

    cmviewcl
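
The new VG is still active on sfc12rc1 from the earlier LVM work, which is why it is deactivated with "vgchange -a n" before cmruncl: the Serviceguard package is expected to activate it itself when pkgSFC12DB1 starts (typically in shared mode for RAC). A quick pre-flight check that nothing is holding the VG active (a sketch):

    # should report "Volume group not activated." before cmruncl is issued
    vgdisplay vg_ora_data04 2>&1 | grep -i "not activated" \
        && echo "vg_ora_data04 is deactivated - OK to start the cluster"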

 

sfc12rc1:/#

sfc12rc1:/# vgchange -a n vg_ora_data04
Volume group "vg_ora_data04" has been successfully changed.
sfc12rc1:/# vgdisplay /dev/vg_ora_data04
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vg_ora_data04".
sfc12rc1:/# cmruncl   --start the cluster
cmruncl: Validating network configuration...
cmruncl: Network validation complete
cmruncl: Validating cluster lock disk .... Done
Waiting for cluster to form ....... done
Cluster successfully formed.
Check the syslog files on all nodes in the cluster to verify that no warnings occurred during startup.
sfc12rc1:/# cmviewcl    --check the status

CLUSTER        STATUS      

clu_RACSFC12   up          

 

  NODE          STATUS       STATE       

  sfc12rc1      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP1    up            running       enabled      sfc12rc1   

 

  NODE          STATUS       STATE       

  sfc12rc2      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP2    up            running       enabled      sfc12rc2   

 

  NODE          STATUS       STATE       

  sfc12rc3      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP3    up            running       enabled      sfc12rc3   

   

UNOWNED_PACKAGES

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12DB1    down          halted        disabled     unowned    

    pkgSFC12DB2    down          halted        disabled     unowned    

    pkgSFC12DB3    down          halted        disabled     unowned    

sfc12rc1:/# cmrunpkg -n sfc12rc1 pkgSFC12DB1   --the DB packages do not auto-start, so start them manually
Running package pkgSFC12DB1 on node sfc12rc1
Successfully started package pkgSFC12DB1 on node sfc12rc1
cmrunpkg: All specified packages are running
sfc12rc1:/# cmrunpkg -n sfc12rc2 pkgSFC12DB2
Running package pkgSFC12DB2 on node sfc12rc2
Successfully started package pkgSFC12DB2 on node sfc12rc2
cmrunpkg: All specified packages are running
sfc12rc1:/# cmrunpkg -n sfc12rc3 pkgSFC12DB3
Running package pkgSFC12DB3 on node sfc12rc3
Successfully started package pkgSFC12DB3 on node sfc12rc3
cmrunpkg: All specified packages are running
sfc12rc1:/# cmviewcl         --show cluster status

 

CLUSTER        STATUS      

clu_RACSFC12   up          

 

  NODE          STATUS       STATE       

  sfc12rc1      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP1    up            running       enabled      sfc12rc1   

    pkgSFC12DB1    up            running       disabled     sfc12rc1   

 

  NODE          STATUS       STATE       

  sfc12rc2      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP2    up           running       enabled      sfc12rc2   

    pkgSFC12DB2    up            running       disabled     sfc12rc2   

 

  NODE          STATUS       STATE       

  sfc12rc3      up           running     

 

    PACKAGE        STATUS        STATE         AUTO_RUN     NODE       

    pkgSFC12IP3    up            running       enabled      sfc12rc3   

    pkgSFC12DB3    up            running       disabled     sfc12rc3   

sfc12rc1:/#

 

-------------------------------------------------------------------------------

11. Change the raw LV ownership and permissions, then start the Oracle cluster on sfc12rc1 through sfc12rc3:

    chown oracle:dba /dev/vg_ora_data04/rlvdata05

    chmod 660 /dev/vg_ora_data04/rlvdata05

 

    chown oracle:dba /dev/vg_ora_data04/rlvdata06

    chmod 660 /dev/vg_ora_data04/rlvdata06

 

    ls -lrt /dev/vg_ora_data04/rlvdata*

 

    vi /apps/oracle/admin/+ASM/pfile/init.ora

    append ",'/dev/vg_ora_data04/rlvdata05','/dev/vg_ora_data04/rlvdata06'" to asm_diskstring (the resulting entry is sketched after this list)

 

    crsctl start crs
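
For reference, after the edit the asm_diskstring entry in /apps/oracle/admin/+ASM/pfile/init.ora should list all seven raw LVs, matching the alter system statement shown at the end of step 12. A sketch of the resulting line (the exact SID prefix, e.g. "*." versus "+ASM1.", depends on how the pfile is written):

    *.asm_diskstring='/dev/vg_ora_arch01/rlvarch01','/dev/vg_ora_data01/rlvdata01','/dev/vg_ora_data02/rlvdata02','/dev/vg_ora_data03/rlvdata03','/dev/vg_ora_data03/rlvdata04','/dev/vg_ora_data04/rlvdata05','/dev/vg_ora_data04/rlvdata06'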

 

sfc12rc1:/#

sfc12rc1:/# chown oracle:dba /dev/vg_ora_data04/rlvdata05   --change owner
sfc12rc1:/# chmod 660 /dev/vg_ora_data04/rlvdata05          --change mode
sfc12rc1:/# chown oracle:dba /dev/vg_ora_data04/rlvdata06
sfc12rc1:/# chmod 660 /dev/vg_ora_data04/rlvdata06
sfc12rc1:/# ls -lrt /dev/vg_ora_data04/rlvdata*
crw-rw----   1 oracle    dba         64 0x070001 Dec 31 08:57 /dev/vg_ora_data04/rlvdata05
crw-rw----   1 oracle    dba         64 0x070002 Dec 31 08:57 /dev/vg_ora_data04/rlvdata06
sfc12rc1:/#
sfc12rc1:/# crsctl start crs  --start crs
Attempting to start CRS stack
The CRS stack will be started shortly
sfc12rc1:/# crsctl check crs --check crs status
CSS appears healthy
CRS appears healthy
EVM appears healthy
sfc12rc1:/#

 

--------------------------------------------------------------------------------

12. Check that all nodes see the new disks, then add the 2 disks to the DGDATA disk group:

 

   column name format a20

   select name,state,type,total_mb,free_mb,unbalanced from v$asm_diskgroup;

   select name,path,total_mb,free_mb,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS from v$asm_disk order by 1,2;

 

   alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata05';

   alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata06';

   --then rebalance (a single-statement alternative is sketched after this list)

   alter diskgroup dgdata rebalance power 11;

 

   select * from v$asm_operation;

   select name,total_mb,free_mb,unbalanced from v$asm_diskgroup;

 

   select group_number,name,path,total_mb,free_mb,header_status from v$asm_disk order by 1,2;
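
Each ALTER DISKGROUP ... ADD DISK above starts its own rebalance at the default power, which is why an explicit "rebalance power 11" is issued afterwards. Standard ASM syntax also allows adding both disks and setting the rebalance power in one statement; a sketch of that alternative (not what was actually run here):

   alter diskgroup dgdata
     add disk '/dev/vg_ora_data04/rlvdata05',
              '/dev/vg_ora_data04/rlvdata06'
     rebalance power 11;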

 

idle>    column name format a20
idle>    select name,state,type,total_mb,free_mb,unbalanced from v$asm_diskgroup;

NAME                 STATE        TYPE      TOTAL_MB    FREE_MB UN
-------------------- ------------ ------ ---------- ---------- --
DGARCH               MOUNTED      EXTERN     307104     304314 N
DGDATA               MOUNTED      EXTERN    1228416     168358 N

idle>    select name,path,total_mb,free_mb,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS from v$asm_disk order by 1,2;

NAME                 PATH                             TOTAL_MB    FREE_MB MOUNT_STATUS   HEADER_STATUS   MODE_STATUS
-------------------- ------------------------------ ---------- ---------- -------------- --------------- -----------
DGARCH_0000          /dev/vg_ora_arch01/rlvarch01       307104     304314 CACHED         MEMBER          ONLINE
DGDATA_0000          /dev/vg_ora_data01/rlvdata01       307104      42084 CACHED         MEMBER          ONLINE
DGDATA_0001          /dev/vg_ora_data02/rlvdata02       307104      42090 CACHED         MEMBER          ONLINE
DGDATA_0002          /dev/vg_ora_data03/rlvdata04       307104      42096 CACHED         MEMBER          ONLINE
DGDATA_0003          /dev/vg_ora_data03/rlvdata03       307104      42088 CACHED         MEMBER          ONLINE
                     /dev/vg_ora_data04/rlvdata05       307104          0 CLOSED         CANDIDATE       ONLINE
                     /dev/vg_ora_data04/rlvdata06       307104          0 CLOSED         CANDIDATE       ONLINE

7 rows selected.

 

idle>

idle> alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata05';
--add disk to the disk group
Diskgroup altered.

idle> alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata06';
--add disk to the disk group
Diskgroup altered.

idle> alter diskgroup dgdata rebalance power 11;  --rebalance the disk group

Diskgroup altered.

 

idle>
idle> select * from v$asm_operation;

GROUP_NUMBER OPERATION  STATE         POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
           2 REBAL      RUN              11         11         35     353309       6160          57

idle> select name,total_mb,free_mb,unbalanced from v$asm_diskgroup;

NAME                   TOTAL_MB    FREE_MB UN
-------------------- ---------- ---------- --
DGARCH                   307104     304314 N
DGDATA                  1842624     782558 N

idle> select group_number,name,path,total_mb,free_mb,header_status from v$asm_disk order by 1,2;

GROUP_NUMBER NAME                 PATH                             TOTAL_MB    FREE_MB HEADER_STATUS
------------ -------------------- ------------------------------ ---------- ---------- -------------
           1 DGARCH_0000          /dev/vg_ora_arch01/rlvarch01       307104     304314 MEMBER
           2 DGDATA_0000          /dev/vg_ora_data01/rlvdata01       307104      42095 MEMBER
           2 DGDATA_0001          /dev/vg_ora_data02/rlvdata02       307104      42102 MEMBER
           2 DGDATA_0002          /dev/vg_ora_data03/rlvdata04       307104      42107 MEMBER
           2 DGDATA_0003          /dev/vg_ora_data03/rlvdata03       307104      42098 MEMBER
           2 DGDATA_0004          /dev/vg_ora_data04/rlvdata05       307104     307078 MEMBER
           2 DGDATA_0005          /dev/vg_ora_data04/rlvdata06       307104     307078 MEMBER

7 rows selected.

 

idle>

 

-------------------------------------------------------------------------------------------------------------

 

  --do this on sfc12rc1 through sfc12rc3 if necessary; check that the ASM init file is correct:

   alter system set asm_diskstring='/dev/vg_ora_arch01/rlvarch01','/dev/vg_ora_data01/rlvdata01','/dev/vg_ora_data02/rlvdata02','/dev/vg_ora_data03/rlvdata03','/dev/vg_ora_data03/rlvdata04','/dev/vg_ora_data04/rlvdata05','/dev/vg_ora_data04/rlvdata06';
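
To confirm the parameter has been picked up by every ASM instance after the change, a quick check run as the oracle user on each node, with the environment pointed at the local +ASM instance (a sketch):

   # show the active asm_diskstring on the local ASM instance
   echo "show parameter asm_diskstring" | sqlplus -s "/ as sysdba"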

 

