xtt.properties

Reduce Transportable Tablespace Downtime using Incremental Backups

(Doc ID 1389592.1)

Properties file for xttdriver.pl

Properties to set are the following:

tablespaces

platformid

srcdir

dstdir

srclink

dfcopydir

backupformat

stageondest

storageondest

backupondest

cnvinst_home

cnvinst_sid

asm_home

asm_sid

parallel

rollparallel

getfileparallel

metatransfer

destuser

desthost

desttmpdir

See documentation below and My Oracle Support Note 1389592.1 for details.

Tablespaces to transport

========================

tablespaces

-----------

Comma separated list of tablespaces to transport from source database

to destination database.

Specify tablespace names in CAPITAL letters.

tablespaces=TS1,TS2
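Before listing tablespaces here, it is standard transportable tablespace practice (not a step performed by xttdriver.pl itself) to verify that the set is self-contained. A minimal sketch, assuming the TS1/TS2 example names above and a SYSDBA connection on the source database:

# Verify that the tablespace set is self-contained (standard TTS check).
sqlplus -s "/ as sysdba" <<'EOF'
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TS1,TS2', TRUE);
-- Any rows returned describe objects that make the set not self-contained.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
EOF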

Source database platform ID

===========================

platformid

----------

Source database platform id, obtained from V$DATABASE.PLATFORM_ID

platformid=2
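For reference, the platform id can be queried directly on the source database. A minimal sketch, assuming ORACLE_HOME and ORACLE_SID point at the source instance:

# Query the source platform id referenced by the platformid property.
sqlplus -s "/ as sysdba" <<'EOF'
SELECT platform_id, platform_name FROM v$database;
EOF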

Parameters required for Prepare Phase method dbms_file_transfer

===============================================================

srcdir

------

Directory object in the source database that defines where the source

datafiles currently reside.

Feb 2015: Ver2: We support multiple SOURCEDIRs.

NOTE: The number of entries in srcdir and dstdir should match.

The srcdir to dstdir mapping can be either N:1 or N:N, i.e. there can be

multiple source directories and the files will be written to a single

destination directory, or files from a particular source directory can be

written to a particular destination directory.

Example [N:1, allowed]

======================

srcdir=SRC1,SRC2

dstdir=DST

In this case the files from SRC1, SRC2 will be written to DST

Example [N:N, allowed]

======================

srcdir=SRC1,SRC2

dstdir=DST1,DST2

In this case the files from SRC1 will be written to DST1, SRC2 to DST2.

Example [N:M, not allowed]

==========================

srcdir=SRC1,SRC2,SRC3

dstdir=DST1,DST2

This is not allowed and will result in an error.

srcdir=SOURCEDIR1,SOURCEDIR2

dstdir

------

Directory object in the destination database that defines where the

destination datafiles will be created.

Feb 2015: Ver2: We support multiple DESTDIRs.

SOURCEDIR1 will map to DESTDIR1, SOURCEDIR2 to DESTDIR2, and so on.

Refer to the srcdir parameter above for more examples.

dstdir=DESTDIR1,DESTDIR2

srclink

-------

Database link in the destination database that refers to the source

database. Datafiles will be transferred over this database link using

dbms_file_transfer.

srclink=TTSLINK
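The srcdir/dstdir directory objects and the srclink database link are ordinary Oracle objects that must exist before the dbms_file_transfer prepare method can be used. A minimal sketch, assuming hypothetical paths (/oradata/prod, /oradata_dest), a placeholder TNS alias (srcdb) and placeholder credentials; adjust everything to your environment:

# On the source database: directory object(s) named in srcdir.
sqlplus -s "/ as sysdba" <<'EOF'
CREATE DIRECTORY sourcedir1 AS '/oradata/prod';    -- hypothetical path
EOF

# On the destination database: directory object(s) named in dstdir and the
# database link named in srclink (used by dbms_file_transfer).
sqlplus -s "/ as sysdba" <<'EOF'
CREATE DIRECTORY destdir1 AS '/oradata_dest';      -- hypothetical path
CREATE PUBLIC DATABASE LINK ttslink
  CONNECT TO system IDENTIFIED BY "password"       -- placeholder credentials
  USING 'srcdb';                                   -- placeholder TNS alias
EOF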

Source system file locations

============================

dfcopydir

---------

This parameter is used only when Prepare phase method is RMAN backup.

Location where datafile copies are created during the "-p prepare" step.

This location must have sufficient free space to hold copies of all

datafiles being transported.

This location may be an NFS-mounted filesystem that is shared with the

destination system, in which case it should reference the same NFS location

as the stageondest property for the destination system.

dfcopydir=/stage_source
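One way to sanity-check the "sufficient free space" requirement is to compare the total size of the datafiles being transported against the free space in the staging location. A minimal sketch, assuming the TS1/TS2 example names and the /stage_source path above:

# Total size (GB) of the datafiles in the tablespaces being transported.
sqlplus -s "/ as sysdba" <<'EOF'
SELECT ROUND(SUM(bytes)/1024/1024/1024, 1) AS gb_needed
  FROM dba_data_files
 WHERE tablespace_name IN ('TS1', 'TS2');
EOF

# Free space available in the staging location.
df -h /stage_source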

backupformat

------------

Location where incremental backups are created.

This location may be an NFS-mounted filesystem that is shared with the

destination system, in which case it should reference the same NFS location

as the stageondest property for the destination system.

backupformat=/stage_source

Destination system file locations

=================================

stageondest

-----------

Location where datafile copies are placed by the user when they are

transferred manually from the source system. This location must have

sufficient free space to hold copies of all datafiles being transported.

This is also the location from where datafile copies and incremental

backups are read when they are converted in the "-c conversion of datafiles"

and "-r roll forward datafiles" steps.

This location may be a DBFS-mounted filesystem.

This location may be an NFS-mounted filesystem that is shared with the

source system, in which case it should reference the same NFS location

as the dfcopydir and backupformat properties for the source system.

stageondest=/stage_dest

storageondest

-------------

This parameter is used only when Prepare phase method is RMAN backup.

Location where the converted datafile copies will be written during the

"-c conversion of datafiles" step. This is the final location of the

datafiles where they will be used by the destination database.

storageondest=+DATA

backupondest

------------

Location where converted incremental backups on the destination system

will be written during the "-r roll forward datafiles" step.

NOTE: If this is set to an ASM location then define properties

asm_home and asm_sid below. If this is set to a file system

location, then comment out asm_home and asm_sid below

backupondest=+RECO

Database home and SID settings for destination system instances

===============================================================

cnvinst_home, cnvinst_sid

-------------------------

Database home and SID of the incremental convert instance that

runs on the destination system.

Only set these parameters if a separate incremental convert home is in use.

cnvinst_home=/u01/app/oracle/product/11.2.0.4/xtt_home

cnvinst_sid=xtt

asm_home, asm_sid

-----------------

Grid home and SID for the ASM instance that runs on the destination

system.

NOTE: If backupondest is set to a file system location, then comment out

both asm_home and asm_sid.

asm_home=/u01/app/11.2.0.4/grid

asm_sid=+ASM1

Parallel parameters

===================

parallel

--------

Parallel defines the channel parallelism used in copying the datafiles

(prepare phase) and in converting them.

Note: Incremental backup creation parallelism is defined by RMAN

configuration for DEVICE TYPE DISK PARALLELISM.

If undefined, default value is 8.

parallel=3
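As the note above says, incremental backup parallelism is controlled by the RMAN disk channel configuration on the source, not by this property. A minimal sketch of checking and setting it (the value 4 is only an example):

# Run against the source database; PARALLELISM 4 is just an example value.
rman target / <<'EOF'
SHOW DEVICE TYPE;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
EOF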

rollparallel

------------

Defines the level of parallelism for the -r roll forward operation.

If undefined, default value is 0 (serial roll forward).

rollparallel=2

getfileparallel

---------------

Defines the level of parallelism for the -G operation

If undefined, default value is 1. Max value supported is 8.

This will be enhanced in the future to support more than 8

depending on the destination system resources.

getfileparallel=4

metatransfer

---------------

If passwordless ssh is enabled between the source and the destination, the

script can automatically transfer the temporary files and the backups from

source to destination. Other parameters like desthost and desttmpdir need to

be defined for this to work. destuser is optional.

metatransfer=1
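Passwordless ssh from the source host to the destination host is a prerequisite for metatransfer=1. A minimal sketch of the usual setup, run on the source host as the OS user that runs xttdriver.pl; the user and host names correspond to the destuser and desthost properties below and are placeholders here:

ssh-keygen -t rsa                    # accept the defaults, empty passphrase
ssh-copy-id username@machinename     # destuser@desthost (placeholders)
ssh username@machinename hostname    # should connect without a password prompt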

destuser

---------

The username that will be used for copying the files from source to

destination using scp. This is optional.

destuser=username

desthost

--------

This will be the name of the destination host.

desthost=machinename

desttmpdir

---------------

This should be set to the same directory as TMPDIR and is used for receiving

the temporary files. The incremental backups will be copied to the directory

pointed to by the stageondest parameter.

desttmpdir=/tmp

dumpdir

---------

The directory to which the dump file will be restored. If this is not

specified, then TMPDIR is used.

dumpdir=/tmp


END

APPLIES TO:

Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Linux x86-64

PURPOSE

NOTE: Consider using the newly released V4 version of this procedure, which greatly simplifies the steps involved. Please refer to: V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup, Note 2471245.1.

This document covers the steps for migrating data between systems of different endian formats with minimal application downtime on 12c and later, using Cross Platform Transportable Tablespaces (XTTS) together with RMAN incremental backups.

The first step is to copy a full backup from the source system to the destination system. After that, a series of incremental backups (each smaller than the previous one) is applied, so that before the downtime window the data on the destination system is "almost" identical to the source. The only steps that require downtime are the final incremental backup and the metadata export/import.

This document describes the procedure for cross platform incremental backup on 12c; for the 11g procedure, please refer to Note 1389592.1.

The cross platform incremental backup feature does not reduce the time spent on the other XTTS steps, such as the metadata export/import. Therefore, if the database contains a large amount of metadata (DDL), as with Oracle E-Business Suite and other packaged applications, the cross platform incremental backup feature does not bring much benefit: in such environments most of the migration time is spent processing metadata rather than converting and transferring datafiles.

Only database objects physically stored in the transported tablespaces are copied to the destination system; for other types of objects stored in other tablespaces (for example PL/SQL objects, sequences, etc. stored in the SYSTEM tablespace), you can use Data Pump to copy them to the destination system.

The main steps of cross platform incremental backup are:

1. Initial setup

2. Prepare phase (source data remains online)
   1. Back up the tablespaces to be transported (level 0 backup)
   2. Transfer the backups and other required files to the destination system
   3. Restore/convert the datafiles to the destination endian format on the destination system

3. Roll forward phase (source data remains online; repeat this phase as many times as needed so that the destination datafile copies are as close to the source as possible)
   1. Create an incremental backup on the source
   2. Transfer the incremental backup and other required files to the destination system
   3. Convert the incremental backup to the destination endian format and apply it to the destination datafiles
   4. Determine the next_scn for the next incremental backup
   5. Repeat these steps until ready to perform the tablespace transport

   NOTE: In version 3, if a datafile is added to a tablespace, or a new tablespace name is added to the xtt.properties file, a warning is raised and additional handling is required.

4. Transport phase (source data must be in READ ONLY mode during this phase; see the sketch after this list for the READ ONLY / READ WRITE statements)
   1. Place the tablespaces in READ ONLY mode on the source
   2. Perform the roll forward phase steps one last time
      This step makes the destination datafile copies exactly consistent with the source datafiles and generates the necessary export files.
      For very large data volumes, this step takes significantly less time than the traditional XTTS method, because the incremental backup is small.
   3. Import the tablespace metadata into the destination database using Data Pump
   4. Place the tablespaces in READ WRITE mode in the destination database
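The READ ONLY / READ WRITE bracketing in the transport phase above corresponds to plain ALTER TABLESPACE statements. A minimal sketch, using the TS1/TS2 example names from xtt.properties; the final roll forward and the Data Pump metadata import that happen between the two blocks are driven by xttdriver.pl and its generated files and are not shown here:

# On the source database, at the start of the downtime window.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE ts1 READ ONLY;
ALTER TABLESPACE ts2 READ ONLY;
EOF

# ... final incremental backup / roll forward and Data Pump metadata import ...

# On the destination database, after the metadata import has completed.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE ts1 READ WRITE;
ALTER TABLESPACE ts2 READ WRITE;
EOF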