Oracle ASM Storage Migration

ASM was introduced with Oracle 10g as a new storage feature that Oracle has promoted heavily ever since. Although its stability and manageability still leave room for improvement, many enterprises have already deployed ASM in core production systems. ASM gives us more options for storage migration: no third-party tools are needed, the procedure is simple, and downtime can be kept very short.

        This article walks through a storage-migration example to explore ASM's data-migration features. The environment is a two-node RAC system that was originally built with datafiles, controlfiles, and redo logs stored in ASM.

        Option 1:

        Use ASM's online add/drop disk capability. This approach relies on ASM's automatic rebalancing: when new disks are added to a disk group, ASM automatically redistributes the existing data onto them.

       1. Carve out raw devices (or asmdisks) on the new storage, and check ASM parameters such as asm_diskstring so that the new devices are visible to the ASM instance.

       2. Add the new disks to the existing disk group.

       3. Drop the original disks from the disk group.

       Note: before running step 3, watch v$asm_operation to track the rebalance and make sure the disk group keeps plenty of free space throughout. The advantage of this approach is zero downtime, but the rebalance runs at its own pace: its progress cannot be controlled and its risk is hard to assess.
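The rebalance mentioned above can be polled from the ASM instance before the old disks are finally dropped; a minimal sketch (assumes the oracle user's environment is set and the ASM instance is named +ASM1):

```shell
# Sketch: wait until no rebalance (REBAL) operation is running; only then
# is it safe to proceed with dropping the old disks.
export ORACLE_SID=+ASM1
while :; do
  n=$(sqlplus -s "/ as sysdba" <<'EOF'
set heading off feedback off pagesize 0
select count(*) from v$asm_operation where operation = 'REBAL';
EOF
)
  [ "$(echo "${n}" | tr -d '[:space:]')" = "0" ] && break
  sleep 60
done
echo "rebalance finished"
```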

        Option 2:

         Use RMAN switch-to-copy to relocate the storage. The technique is not specific to ASM; with a little flexibility, switch ... to copy can move data in other configurations too and saves a lot of tedious manual work.

         1. Carve out raw devices (or asmdisks) on the new storage, and check ASM parameters such as asm_diskstring so that the new devices are visible to the ASM instance.

         2. Create the new ASM disk group.

         3. Run an online RMAN backup as copy into the new disk group.

         4. Edit the parameter file to point control_files at the new location, restart the instance to NOMOUNT, restore the controlfile to the new path with RMAN, mount, switch the database to the copy, and recover it.

         5. Migrate the tempfiles and online redo logs.

         6. Update db_create_file_dest, db_create_online_log_dest_n, and log_archive_dest_n, then dismount the old disk group.

         Note: dismount the old disk group only after everything has been verified, and keep the old redo logs around for the final recovery; the RECOVER time determines the outage. This approach keeps the risk under control, but you must estimate how much redo is generated between the backup as copy and the switch to copy, because that volume drives the recovery time and therefore the downtime. If you can get a maintenance window, so much the better.
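The redo generated between the copy and the switch can be estimated from v$archived_log; a sketch (adjust the time window to when the backup as copy started):

```shell
# Sketch: total archived redo (MB) generated in, e.g., the last hour;
# this roughly bounds the RECOVER time at switch-over.
sqlplus -s "/ as sysdba" <<'EOF'
select round(sum(blocks * block_size) / 1024 / 1024) redo_mb
from   v$archived_log
where  first_time > sysdate - 1/24;
EOF
```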

 

        We will use Option 2 as the implementation guide below. The environment is a two-node RAC system that was originally built with datafiles, controlfiles, and redo logs in ASM; the goal is to move everything onto new storage, also managed by ASM.

        

1. The original disks and the newly attached disks

[root@rac2 etc]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1        2418    19422553+  83  Linux

/dev/sda2            2419        2609     1534207+  82  Linux swap

Disk /dev/sdb: 536 MB, 536870912 bytes

64 heads, 32 sectors/track, 512 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

  Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1         512      524272   83  Linux

Disk /dev/sdc: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1         391     3140676   83  Linux

Disk /dev/sdd: 3221 MB, 3221225472 bytes

255 heads, 63 sectors/track, 391 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1         391     3140676   83  Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes

255 heads, 63 sectors/track, 261 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sde1               1         261     2096451   83  Linux

Disk /dev/sdf: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdi: 107 MB, 107374080 bytes

64 heads, 32 sectors/track, 102 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdi doesn't contain a valid partition table

Disk /dev/sdj: 107 MB, 107374080 bytes

64 heads, 32 sectors/track, 102 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdj doesn't contain a valid partition table

 

2. Partition the new disks


[root@rac1 ~]# cd /dev

[root@rac1 dev]# ll sd*

brw-rw----  1 root disk 8,   0 Oct  4 04:33 sda

brw-rw----  1 root disk 8,   1 Oct  4 04:33 sda1

brw-rw----  1 root disk 8,   2 Oct  4 04:33 sda2

brw-rw----  1 root disk 8,  16 Oct  4 04:33 sdb

brw-rw----  1 root disk 8,  17 Oct  4 04:33 sdb1

brw-rw----  1 root disk 8,  32 Oct  4 04:33 sdc

brw-rw----  1 root disk 8,  33 Oct  4 08:34 sdc1

brw-rw----  1 root disk 8,  48 Oct  4 04:33 sdd

brw-rw----  1 root disk 8,  49 Oct  4 08:34 sdd1

brw-rw----  1 root disk 8,  64 Oct  4 04:33 sde

brw-rw----  1 root disk 8,  65 Oct  4 08:34 sde1

brw-rw----  1 root disk 8,  80 Oct  4 04:33 sdf

brw-rw----  1 root disk 8,  96 Oct  4 04:33 sdg

brw-rw----  1 root disk 8, 112 Oct  4 04:33 sdh

brw-rw----  1 root disk 8, 128 Oct  4 04:33 sdi

brw-rw----  1 root disk 8, 144 Oct  4 04:33 sdj

[root@rac2 dev]# fdisk /dev/sdf

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-15000, default 1): 

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-15000, default 15000): 

Using default value 15000

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Partition /dev/sdg through /dev/sdj the same way.
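The repetition can be scripted by feeding fdisk its interactive answers (n, p, 1, default start, default end, w). A guarded sketch; as written it is a dry run that only prints what it would do, and must be run as root with DRY_RUN cleared to partition for real:

```shell
# Sketch: create one primary partition spanning each remaining disk.
# DRY_RUN guards against accidental writes; unset it (as root) to apply.
DRY_RUN=yes
for d in /dev/sdg /dev/sdh /dev/sdi /dev/sdj; do
  if [ -n "${DRY_RUN}" ]; then
    echo "would partition ${d}"
  else
    printf 'n\np\n1\n\n\nw\n' | fdisk "${d}"
  fi
done
```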

 
After partitioning:

[root@rac1 dev]# ll sd*

brw-rw----  1 root disk 8,   0 Oct  4 04:33 sda

brw-rw----  1 root disk 8,   1 Oct  4 04:33 sda1

brw-rw----  1 root disk 8,   2 Oct  4 04:33 sda2

brw-rw----  1 root disk 8,  16 Oct  4 04:33 sdb

brw-rw----  1 root disk 8,  17 Oct  4 04:33 sdb1

brw-rw----  1 root disk 8,  32 Oct  4 04:33 sdc

brw-rw----  1 root disk 8,  33 Oct  4 08:34 sdc1

brw-rw----  1 root disk 8,  48 Oct  4 04:33 sdd

brw-rw----  1 root disk 8,  49 Oct  4 08:34 sdd1

brw-rw----  1 root disk 8,  64 Oct  4 04:33 sde

brw-rw----  1 root disk 8,  65 Oct  4 08:34 sde1

brw-rw----  1 root disk 8,  80 Oct  4 04:33 sdf

brw-rw----  1 root disk 8,  81 Oct  4 08:43 sdf1

brw-rw----  1 root disk 8,  96 Oct  4 04:33 sdg

brw-rw----  1 root disk 8,  97 Oct  4 08:44 sdg1

brw-rw----  1 root disk 8, 112 Oct  4 04:33 sdh

brw-rw----  1 root disk 8, 113 Oct  4 08:44 sdh1

brw-rw----  1 root disk 8, 128 Oct  4 04:33 sdi

brw-rw----  1 root disk 8, 129 Oct  4 08:45 sdi1

brw-rw----  1 root disk 8, 144 Oct  4 08:45 sdj

brw-rw----  1 root disk 8, 145 Oct  4 08:45 sdj1

 
 

Our plan:

sdf1, sdg1, sdh1: the new ASM disk group

sdi1: the voting disk, on an OCFS2 file system

sdj1: the OCR, as a raw device


3. Check the ASM volumes bound to the original disks

[root@rac1 etc]# /etc/init.d/oracleasm listdisks

VOL1

VOL2

VOL3

 

Bind the raw devices to the block devices; do this on both nodes.

Edit /etc/sysconfig/rawdevices and add:

/dev/raw/raw4 /dev/sdf1

/dev/raw/raw5 /dev/sdg1

/dev/raw/raw6 /dev/sdh1

/dev/raw/raw7 /dev/sdj1
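The four entries above follow one pattern, so a small helper (illustrative only) can generate them and guarantee node 2 gets exactly the same file; redirect its output with `>> /etc/sysconfig/rawdevices` as root:

```shell
# Sketch: emit the /etc/sysconfig/rawdevices entries for the new partitions.
raw_no=4
for part in /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdj1; do
  printf '/dev/raw/raw%d %s\n' "${raw_no}" "${part}"
  raw_no=$((raw_no + 1))
done
```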

  

To make the mappings take effect, restart the raw devices service:

 

[root@rac1 ~]# service rawdevices restart

Assigning devices: 

           /dev/raw/raw1  -->   /dev/sdc1

Error setting raw device (Device or resource busy)

           /dev/raw/raw2  -->   /dev/sdd1

Error setting raw device (Device or resource busy)

           /dev/raw/raw3  -->   /dev/sde1

Error setting raw device (Device or resource busy)

           /dev/raw/raw4  -->   /dev/sdf1

/dev/raw/raw4:  bound to major 8, minor 81

           /dev/raw/raw5  -->   /dev/sdg1

/dev/raw/raw5:  bound to major 8, minor 97

           /dev/raw/raw6  -->   /dev/sdh1

/dev/raw/raw6:  bound to major 8, minor 113

           /dev/raw/raw7  -->   /dev/sdj1

/dev/raw/raw7:  bound to major 8, minor 145

Note: edit /etc/udev/permissions.d/50-udev.permissions. Raw devices are re-mapped at boot, and by default their owner then reverts to root; if the owner is not the oracle user, ASM will have trouble accessing the shared partitions. Comment out the original line "raw/*:root:disk:0660" and add a new line "raw/*:oracle:dba:0660".

/etc/udev/permissions.d/50-udev.permissions 

# raw devices

ram*:root:disk:0660

#raw/*:root:disk:0660

raw/*:oracle:dba:0660

 

4. Set permissions; do this on both nodes

[root@rac1 raw]# chown oracle:dba raw4 raw5 raw6

[root@rac1 raw]# ll

total 0

crw-rw----  oracle dba 162, 1 Oct  2 17:08 raw1

crw-rw----  oracle dba 162, 2 Oct  2 17:08 raw2

crw-rw----  oracle dba 162, 3 Oct  2 17:08 raw3

crw-rw----  oracle dba 162, 4 Oct  2 17:08 raw4

crw-rw----  oracle dba 162, 5 Oct  2 17:08 raw5

crw-rw----  oracle dba 162, 6 Oct  2 17:08 raw6

 

5. Run the following as the oracle user on both nodes

rac1-> ln -sf /dev/raw/raw4 /u01/oracle/oradata/devdb/asmdisk4

rac1-> ln -sf /dev/raw/raw5 /u01/oracle/oradata/devdb/asmdisk5

rac1-> ln -sf /dev/raw/raw6 /u01/oracle/oradata/devdb/asmdisk6

 

 

6. Stamp the new ASM volumes on node 1

[root@rac1 raw]# /etc/init.d/oracleasm createdisk VOL4 /dev/sdf1

Marking disk "/dev/sdf1" as an ASM disk: [  OK  ]

[root@rac1 raw]# /etc/init.d/oracleasm createdisk VOL5 /dev/sdg1

Marking disk "/dev/sdg1" as an ASM disk: [  OK  ]

[root@rac1 raw]# /etc/init.d/oracleasm createdisk VOL6 /dev/sdh1

Marking disk "/dev/sdh1" as an ASM disk: [  OK  ]

[root@rac1 raw]# /etc/init.d/oracleasm listdisks

VOL1

VOL2

VOL3

VOL4

VOL5

VOL6 

 

 Run on node 2:

[root@rac2 dev]# /etc/init.d/oracleasm listdisks

VOL1

VOL2

VOL3

[root@rac2 dev]# /etc/init.d/oracleasm scandisks

Scanning system for ASM disks: [  OK  ]

[root@rac2 dev]# /etc/init.d/oracleasm listdisks

VOL1

VOL2

VOL3

VOL4

VOL5

VOL6

 

7. Create the OCFS2 file system

Log in to the rac1 desktop and run:

ocfs2console

Tasks --> Format

Select /dev/sdi1.

As root on both nodes, create the mount point:

[root@rac1 ~]# mkdir /ocfs2

[root@rac1 /]# chown -R oracle:dba  /ocfs2/

Mount the file system by running the following as root on both nodes:

# mount -t ocfs2 -o datavolume,nointr /dev/sdi1 /ocfs2

To mount it at boot, add the following line to /etc/fstab on both nodes:

/etc/fstab

/dev/sdi1 /ocfs2 ocfs2 _netdev,datavolume,nointr 0 0

 

 

 

8. Create the new ASM disk group

Log in to the rac1 desktop as oracle and run dbca, then create the new disk group from the freshly stamped volumes (the dbca screenshots are omitted here).

Creation complete.

 

9. Move the data

Run RMAN on node 1:

 

RMAN> backup as copy database format '+NDG1';

Starting backup at 04-OCT-11

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=147 instance=devdb1 devtype=DISK

channel ORA_DISK_1: starting datafile copy

input datafile fno=00001 name=+DG1/devdb_rac/datafile/system.256.761942223

output filename=+NDG1/devdb_rac/datafile/system.256.763645653 tag=TAG20111004T114731 recid=5 stamp=763645666

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15

channel ORA_DISK_1: starting datafile copy

input datafile fno=00003 name=+DG1/devdb_rac/datafile/sysaux.257.761942225

output filename=+NDG1/devdb_rac/datafile/sysaux.257.763645667 tag=TAG20111004T114731 recid=6 stamp=763645676

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15

channel ORA_DISK_1: starting datafile copy

input datafile fno=00005 name=+DG1/devdb_rac/datafile/example.264.761942429

output filename=+NDG1/devdb_rac/datafile/example.258.763645683 tag=TAG20111004T114731 recid=7 stamp=763645688

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting datafile copy

input datafile fno=00009 name=+DG1/devdb_rac/datafile/streams_ts.271.762135319

output filename=+NDG1/devdb_rac/datafile/streams_ts.259.763645691 tag=TAG20111004T114731 recid=8 stamp=763645693

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting datafile copy

input datafile fno=00010 name=+DG1/devdb_rac/datafile/orabmtest.272.762138449

output filename=+NDG1/devdb_rac/datafile/orabmtest.260.763645697 tag=TAG20111004T114731 recid=9 stamp=763645700

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting datafile copy

input datafile fno=00002 name=+DG1/devdb_rac/datafile/undotbs1.258.761942225

output filename=+NDG1/devdb_rac/datafile/undotbs1.261.763645705 tag=TAG20111004T114731 recid=10 stamp=763645706

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting datafile copy

input datafile fno=00006 name=+DG1/devdb_rac/datafile/undotbs2.265.761942785

output filename=+NDG1/devdb_rac/datafile/undotbs2.262.763645707 tag=TAG20111004T114731 recid=11 stamp=763645708

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting datafile copy

input datafile fno=00007 name=+DG1/devdb_rac/datafile/sttest.269.762038641

output filename=+NDG1/devdb_rac/datafile/sttest.263.763645711 tag=TAG20111004T114731 recid=12 stamp=763645711

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01

channel ORA_DISK_1: starting datafile copy

copying current control file

output filename=+NDG1/devdb_rac/controlfile/backup.264.763645711 tag=TAG20111004T114731 recid=13 stamp=763645713

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting datafile copy

input datafile fno=00004 name=+DG1/devdb_rac/datafile/users.259.761942225

output filename=+NDG1/devdb_rac/datafile/users.265.763645715 tag=TAG20111004T114731 recid=14 stamp=763645715

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01

channel ORA_DISK_1: starting datafile copy

input datafile fno=00008 name=+DG1/devdb_rac/datafile/sttest0.270.762038681

output filename=+NDG1/devdb_rac/datafile/sttest0.266.763645717 tag=TAG20111004T114731 recid=15 stamp=763645716

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01

channel ORA_DISK_1: starting full datafile backupset

channel ORA_DISK_1: specifying datafile(s) in backupset

including current SPFILE in backupset

channel ORA_DISK_1: starting piece 1 at 04-OCT-11

channel ORA_DISK_1: finished piece 1 at 04-OCT-11

piece handle=+NDG1/devdb_rac/backupset/2011_10_04/nnsnf0_tag20111004t114731_0.267.763645719 tag=TAG20111004T114731 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

Finished backup at 04-OCT-11

 
Next, update the parameter file:

alter system set control_files='+NDG1/devdb_rac/controlfile/controldevdb' scope=spfile sid='*';

Note: control_files='+NDG1/devdb_rac/controlfile/controldevdb' points into the new ASM disk group; the controlfile restore later will automatically write its output there.
Move the spfile to the new ASM disk group as well.
In a RAC environment all nodes share one parameter file, so it must live on shared storage.
Point the init files at the new ASM disk group location.
Do this on both nodes!

rac1-> cd /u01/oracle/product/10.2.0/db_1/dbs/
rac1-> ll
total 15296
-rw-rw----  1 oracle oinstall     1525 Oct 10 08:04 ab_+ASM1.dat
-rw-r-----  1 oracle oinstall     1544 Sep 15 18:31 hc_+ASM1.dat
-rw-r-----  1 oracle oinstall     1544 Sep 15 18:36 hc_devdb1.dat
lrwxrwxrwx  1 oracle oinstall       37 Sep 15 18:31 init+ASM1.ora -> /u01/oracle/admin/+ASM/pfile/init.ora
-rw-r-----  1 oracle oinstall       40 Sep 15 18:47 initdevdb1.ora
-rw-r-----  1 oracle oinstall    12920 May  3  2001 initdw.ora
-rw-r-----  1 oracle oinstall     8385 Sep 11  1998 init.ora
-rw-r-----  1 oracle oinstall     1536 Sep 15 18:31 orapw+ASM1
-rw-r-----  1 oracle oinstall     1536 Sep 15 18:48 orapwdevdb1
-rw-r-----  1 oracle oinstall 15548416 Oct 10 08:40 snapcf_devdb1.f
rac1-> more initdevdb1.ora
SPFILE='+DG1/devdb_rac/spfiledevdb.ora'
rac1-> vi initdevdb1.ora

SPFILE='+NDG1/devdb_rac/spfiledevdb.ora'

 

On one of the nodes:
SQL> create pfile='/u01/pfile1010' from spfile;
SQL> create spfile='+NDG1/devdb_rac/spfiledevdb.ora' from pfile='/u01/pfile1010';
 
Restart all instances to NOMOUNT.
Use show parameter to verify that control_files was changed; if not, edit the pfile you just created and re-create the spfile from it.
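The restart can be scripted; a sketch (10g srvctl syntax, assuming the database is registered as devdb, run as oracle on one node):

```shell
# Sketch: stop both instances, then bring one up in NOMOUNT to verify
# that control_files now points at the new disk group.
srvctl stop database -d devdb
export ORACLE_SID=devdb1
sqlplus "/ as sysdba" <<'EOF'
startup nomount
show parameter control_files
EOF
```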
 
Restore the controlfile with RMAN on node 1:

RMAN> restore controlfile from '+DG1/DEVDB_RAC/CONTROLFILE/Current.260.761942341';

Starting restore at 04-OCT-11

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=146 instance=devdb2 devtype=DISK

channel ORA_DISK_1: copied control file copy

output filename=+DG1/devdb_rac/controlfile/current.260.761942341

output filename=+RECOVERYDEST/devdb_rac/controlfile/current.256.761942341

Finished restore at 04-OCT-11


Then:
SQL> alter database mount;

Database altered.


Switch the database to the copies:

RMAN> switch database to copy;

 

released channel: ORA_DISK_1

datafile 1 switched to datafile copy "+NDG1/devdb_rac/datafile/system.257.763648965"

datafile 2 switched to datafile copy "+NDG1/devdb_rac/datafile/undotbs1.262.763649013"

datafile 3 switched to datafile copy "+NDG1/devdb_rac/datafile/sysaux.258.763648979"

datafile 4 switched to datafile copy "+NDG1/devdb_rac/datafile/users.264.763649031"

datafile 5 switched to datafile copy "+NDG1/devdb_rac/datafile/example.259.763648995"

datafile 6 switched to datafile copy "+NDG1/devdb_rac/datafile/undotbs2.263.763649019"

datafile 7 switched to datafile copy "+NDG1/devdb_rac/datafile/sttest.265.763649023"

datafile 8 switched to datafile copy "+NDG1/devdb_rac/datafile/sttest0.256.763649035"

datafile 9 switched to datafile copy "+NDG1/devdb_rac/datafile/streams_ts.260.763648999"

datafile 10 switched to datafile copy "+NDG1/devdb_rac/datafile/orabmtest.261.763649005"


Run the recovery:

RMAN> RECOVER DATABASE;

 

Try opening the database:

RMAN> alter database open;

database opened

 

Datafiles and controlfile migrated successfully!

 

10. Move the temporary tablespace and online redo logs

Re-create the temporary tablespace:

create temporary tablespace TEMP2 TEMPFILE '+NDG1/devdb_rac/datafile/temp2' SIZE 512M;

 

Make it the default temporary tablespace:

alter database default temporary tablespace temp2;
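Once no sessions still reference the old temporary tablespace, it can be dropped; a sketch, assuming the old tablespace was named TEMP:

```shell
# Sketch: drop the old temporary tablespace after it is no longer the
# default and no sessions are using it. 'temp' is an assumed name.
sqlplus -s "/ as sysdba" <<'EOF'
drop tablespace temp including contents and datafiles;
EOF
```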

 

Create the new redo logs (new layout: three groups per thread, two 50 MB members per group, with the members of each group in different ASM disk groups):

Note: the members of one log group should be placed on different disks.

alter database add logfile  thread 1 group 5 '+NDG1/devdb_rac/onlinelog/log1' SIZE 50M;

alter database add logfile  thread 1 group 6 '+NDG1/devdb_rac/onlinelog/log2' SIZE 50M;

alter database add logfile  thread 1 group 7 '+NDG1/devdb_rac/onlinelog/log3' SIZE 50M;

 

alter database add logfile  thread 2 group 8 '+NDG1/devdb_rac/onlinelog/log4' SIZE 50M;

alter database add logfile  thread 2 group 9 '+NDG1/devdb_rac/onlinelog/log5' SIZE 50M;

alter database add logfile  thread 2 group 10 '+NDG1/devdb_rac/onlinelog/log6' SIZE 50M;

 

Add the second members in the +NRECOVERY disk group:

alter database add logfile member '+NRECOVERY/onlinelog/log1' to group 5 ;

alter database add logfile member '+NRECOVERY/onlinelog/log2' to group 6 ;

alter database add logfile member '+NRECOVERY/onlinelog/log3' to group 7 ;

 

alter database add logfile member '+NRECOVERY/onlinelog/log4' to group 8 ;

alter database add logfile member '+NRECOVERY/onlinelog/log5' to group 9 ;

alter database add logfile member '+NRECOVERY/onlinelog/log6' to group 10 ;


Drop the old online logs.
First switch the current log into the new groups, then drop the old ones.

Switch logs:

SQL> alter system switch logfile;

Drop:

SQL> alter database drop logfile group x;

If this raises an error, clear the log group first:

alter database clear unarchived logfile group 1;
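Before dropping a group, it helps to confirm it is neither CURRENT nor ACTIVE on either thread; a quick check (sketch):

```shell
# Sketch: only groups with status INACTIVE can be dropped safely.
sqlplus -s "/ as sysdba" <<'EOF'
select thread#, group#, status from v$log order by thread#, group#;
EOF
```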

 

11. Cleanup

Update the OMF parameter; the other destination parameters were empty to begin with, so I left them alone.

alter system set db_create_file_dest='+NDG1' scope=spfile sid='*';

 
12. Move the voting disk
Check the current location on rac1:
rac1-> crsctl query css votedisk;
  0.         0        /ocfs/clusterware/votingdisk
 
As root on rac1, stop CRS:
[root@rac1 ~]# /u01/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued. 
 
Add a voting disk at the new location:
[root@rac1 ~]# /u01/oracle/product/10.2.0/crs_1/bin/crsctl add css votedisk /ocfs2/votingdisk -force
Now formatting voting disk: /ocfs2/votingdisk
successful addition of votedisk /ocfs2/votingdisk.
 
Delete the old voting disk:
[root@rac1 ~]# /u01/oracle/product/10.2.0/crs_1/bin/crsctl delete css votedisk /ocfs/clusterware/votingdisk -force
successful deletion of votedisk /ocfs/clusterware/votingdisk.
 
Note: remember to fix the ownership!
[root@rac1 ocfs2]# ll
total 10001
-rw-r--r--   1 root root               0 Oct   9 08:30 hello
drwxr-xr-x   2 root root         1024 Oct   9 08:27 lost+found
-rw-r--r--   1 root root 10240000 Oct 10 23:27 votingdisk
[root@rac1 ocfs2]# chown  oracle:dba votingdisk
 
 
13. Move the OCR
Check the OCR:

rac2-> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262144
         Used space (kbytes)      :       3796
         Available space (kbytes) :     258348
         ID                       : 1699096636
         Device/File Name         : /ocfs/clusterware/ocr
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

 
Add an OCR mirror (note below that running ocrconfig as oracle fails with PROT-20; rerun the command as root):
 

rac1-> /u01/oracle/product/10.2.0/crs_1/bin/ocrconfig -replace ocrmirror /dev/raw/raw8
PROT-20: Insufficient permission to proceed. Require privileged user
rac1-> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262144
         Used space (kbytes)      :       3796
         Available space (kbytes) :     258348
         ID                       : 1699096636
         Device/File Name         : /ocfs/clusterware/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw8
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

 
Remove the old OCR:

[root@rac1 ocfs2]# /u01/oracle/product/10.2.0/crs_1/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     524184
         Used space (kbytes)      :       3796
         Available space (kbytes) :     520388
         ID                       : 1699096636
         Device/File Name         : /dev/raw/raw8
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

That essentially completes the migration.