expdp & impdp Parameter Notes

(1) Overview

1. Data Pump runs as server-side processes and writes its dump files on the server; exp/imp and expdp/impdp files are not interchangeable. If the exporting or importing user lacks the necessary privileges, grant one of the following roles:

DATAPUMP_EXP_FULL_DATABASE or the DATAPUMP_IMP_FULL_DATABASE 

2. Do not run exports or imports as SYSDBA.

Do not start Export or Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.

  3. A FULL=Y export does not export objects in system schemas such as SYS, ORDSYS, and MDSYS:

Several system schemas cannot be exported because they are not user schemas; they contain Oracle-managed data and metadata. Examples of system schemas that are not exported include SYS, ORDSYS, and MDSYS.

In addition, grants on objects owned by SYS, the AWR, and the XDB repository are not exported:

Grants on objects owned by the SYS schema are never exported.

The Automatic Workload Repository (AWR) is not moved in a full database export and import operation.

The XDB repository is not moved in a full database export and import operation. User created XML schemas are moved.

Also note that after a FULL=Y import you may need to reset the SYS password:

the import operation attempts to copy the password for the SYS account from the source database. This sometimes fails (for example, if the password is in a shared password file). If it does fail, then after the import completes, you must set the password for the SYS account at the target database to a password of your choice.

  4. A schema-mode export also exports nonschema objects associated with each schema, including the user definition itself, all system and role grants, the user's password history, and so on, so that the schemas can be re-created at import time:

The DATAPUMP_EXP_FULL_DATABASE role also allows you to export additional nonschema object information for each specified schema so that the schemas can be re-created at import time. This additional information includes the user definitions themselves and all associated system and role grants, user password history, and so on.

The SYS schema cannot be exported in schema mode:

The SYS schema cannot be used as a source schema for export jobs.

  5. In table mode (TABLES=[schema_name.]table_name[:partition_name] [, ...]) or transportable-tablespace mode, if a table uses user-defined types, the type definitions are not exported:

Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time.

The TABLES option accepts the wildcard %, but % cannot be used for partitioned tables. Table names are converted to uppercase, so a name containing lowercase letters must be double-quoted, and on the command line the quotes must also be escaped, for example TABLES='\"Emp\"'. A table name containing # likewise needs double quotes, for example '\"Emp#\"'. In a parameter file the format is the same except that the backslash escapes are not needed (a parameter-file sketch follows the quote below).

The export of tables that include a wildcard character, %, in the table name is not supported if the table has partitions.
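For reference, a minimal parameter-file sketch of the quoting described above (directory, dump file, and table names are placeholders; the double quotes are kept but the backslash escapes are dropped):

$ cat tab.par

DIRECTORY=dpump_dir1

DUMPFILE=tab.dmp

TABLES='"Emp#"','"MixedCase"'

$ expdp hr parfile=tab.par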

  6. A tablespace-mode export also exports all objects that the selected objects depend on; unlike transportable tablespace mode, the set does not need to be self-contained. Objects the user has no privileges on are not exported:

Privileged users get all tables. Unprivileged users get only the tables in their own schemas.

  7. A transportable-tablespace export job cannot be restarted once stopped and cannot run in parallel, and the target database must be at the same or a later release than the source:

Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1. You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or later release level as the source database.

  8. Object names given on the command line are converted to uppercase; to preserve lowercase, enclose the name in double quotes, for example TABLES="hr.employees":

Oracle Data Pump by default changes values entered as lowercase or mixed-case into uppercase. For example, if you enter TABLE=hr.employees, then it is changed to TABLE=HR.EMPLOYEES.

Some operating systems also require the double quotes themselves to be escaped, and the escaping differs between systems; if neither of the following forms works, use a parameter file instead:

TABLES = \"MixedCaseTableName\"

TABLES = '\"MixedCaseTableName\"'

  9. A mode does not have to be specified on import; if none is given, the entire dump file set is loaded in the mode in which it was exported:

When the source of the import operation is a dump file set, specifying a mode is optional. If no mode is specified, then Import attempts to load the entire dump file set in the mode in which the export operation was run.

The import mode can differ from the export mode; for example, impdp can run in schema mode against a dump file produced by a full, schema, table, or tablespace export. Objects the importing user has no privileges on are automatically skipped.

  10. Tables with disabled unique indexes are not loaded; the indexes must be dropped or re-enabled before the import:

Data Pump does not load tables with disabled unique indexes. To load data into the table, the indexes must be either dropped or reenabled.

  11. expdp/impdp accepts some legacy exp/imp parameters for backward compatibility; using them is not recommended.
  12. Export/import performance
  1. Use the PARALLEL parameter, but do not set it above twice the number of logical CPUs. Higher parallelism means more CPU, memory, and I/O consumption, so after sizing PARALLEL by CPU, check whether memory or I/O becomes the bottleneck.
  2. When using NETWORK_LINK, consider whether compression can be used to improve performance.
  3. Before release 12.1 a large volume of statistics is exported, which consumes a lot of memory at import time; specify EXCLUDE=STATISTICS on export or import and regather statistics with dbms_stats after the import (see the sketch below).
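A hedged sketch of that approach (directory, dump file, and schema names are placeholders):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_nostats.dmp EXCLUDE=STATISTICS

After the import finishes on the target database, regather the statistics:

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');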

The following database parameters can help performance:

DISK_ASYNCH_IO=TRUE

DB_BLOCK_CHECKING=FALSE

DB_BLOCK_CHECKSUM=FALSE

Set the following parameters large enough to allow maximum parallelism:

PROCESSES

SESSIONS

PARALLEL_MAX_SERVERS

  • Processes

A Data Pump job is carried out by a master table, a master process, and one or more worker processes.

Master process: named <instance>DMnn_<pid>

Worker processes: named <instance>DWnn_<pid>; these are the processes that actually export and import the data

The expdp and impdp executables are client processes.

  1. The master process controls the whole job, including communication with clients, creating and controlling the worker process pool, and writing the log.
  2. The master table records job progress while data and metadata are being moved. It is created in the schema of the connecting user and is named after job_name, so the user needs the CREATE TABLE privilege and tablespace quota, and the job name must not clash with an existing table or view.
  1) For export jobs, the master table records where each database object is located in the dump file set; when the export job completes, the master table contents are written to the dump file set:

For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.

  2) For import jobs, the master table is loaded from the dump file set and is used to control the order in which objects are imported:

For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.

  3) The information in the master table can be used to restart a job (except for jobs that cannot be stopped and restarted, such as transportable exports).
  4) By default the master table is dropped when the job completes normally; it can be retained with KEEP_MASTER=YES. If the job exits with a failure, the master table is kept; you can drop it manually and rerun the job.

If STOP_JOB is issued in interactive mode, the master table is retained so the job can be restarted later.

If KILL_JOB is issued in interactive mode, the master table is dropped.

If the job is stopped before it has started copying anything, the master table is dropped. (A sketch of attaching to and restarting a stopped job follows.)
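As a sketch of the restart workflow (the job name SYS_EXPORT_SCHEMA_01 is an assumption; use the name shown in the log or in DBA_DATAPUMP_JOBS):

$ expdp hr ATTACH=SYS_EXPORT_SCHEMA_01

Export> STATUS

Export> START_JOB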

  3. Data Pump can use multiple worker processes to run concurrently; the degree of parallelism is set with the PARALLEL parameter.

For export, everything including metadata and data can be unloaded in parallel, except in transportable-tablespace mode; for import, objects must be created in the correct dependency order:

If there are enough objects of the same type to make use of multiple workers, then the objects will be imported by multiple worker processes. Some metadata objects have interdependencies which require one worker process to create them serially to satisfy those dependencies. Worker processes are created as needed until the number of worker processes equals the value supplied for the PARALLEL command-line parameter.

Export Parameters

Metadata filters are the EXCLUDE and INCLUDE parameters; both work for export and import, but they cannot be used together.

  1. EXCLUDE=object_type[:name_clause] [, ...]

The usable object types can be found in the following views; the OBJECT_PATH column is the object type:

DATABASE_EXPORT_OBJECTS for full mode

SCHEMA_EXPORT_OBJECTS for schema mode

TABLE_EXPORT_OBJECTS for table and tablespace mode.

set linesize 200

col OBJECT_PATH format a30

col COMMENTS format a120

col named format a1

SELECT * FROM SCHEMA_EXPORT_OBJECTS;

OBJECT_PATH                COMMENTS                                                    N

-------------------------- ----------------------------------------------------------- -

PROCDEPOBJ_GRANT           Grants on instance procedural objects

PROCEDURE                  Procedures and their dependent grants and audits            Y

PROCEDURE/ALTER_PROCEDURE  Recompile procedures

PROCOBJ                    Procedural objects in the selected schemas                  Y

PROCOBJ_AUDIT              Schema procedural object audits in the selected schemas

...

name_clause is optional; if it is omitted, all objects of that object type are excluded. A name_clause can only be specified for object types that have names (for example, it is applicable to TABLE, but not to GRANT). To check which types are named, query: SELECT * FROM schema_export_objects WHERE named='Y';

The name_clause may contain SQL comparison operators; separate multiple filters with commas, enclose the name clause in double quotes and name strings in single quotes, for example EXCLUDE=INDEX:"LIKE 'EMP%'". On the command line the quotes may need to be escaped, and the escaping differs by operating system; using a parameter file avoids the escaping entirely.

Note that the name must match exactly, including case:

For example, if the name_clause you supply is for a table named EMPLOYEES, then there must be an existing table named EMPLOYEES using all upper case. If the name_clause were supplied as Employees or employees or any other variation, then the table would not be found.

Note in particular:

  1. Specifying EXCLUDE=CONSTRAINT excludes all constraints, except for any constraints needed for successful table creation and loading; for example, primary key constraints for index-organized tables, or REFSCOPE and WITH ROWID constraints for tables with REF columns.
  2. Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.
  3. Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas. To exclude a specific user and all objects in that user's schema, use EXCLUDE=SCHEMA.

Example

Exclude=index,constraint,statistics,view,package,function

Exclude=table:"like 'EMP%'","'T1'","in ('employees','address')",schema:"'hr'"  

  2. INCLUDE=object_type[:name_clause] [, ...]

Usage is exactly the same as for EXCLUDE:

$ expdp hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=YES

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Data filters are the QUERY and SAMPLE parameters:

Using a data filter prevents the direct path method from being used for the export.

Data filtering can also occur indirectly because of metadata filtering, which can include or exclude table objects along with any associated row data.

Each data filter can be specified once per table within a job. If different filters using the same name are applied to both a particular table and to the whole job, then the filter parameter supplied for the specific table takes precedence.

  1. QUERY = [schema.][table_name:] query_clause

query_clause is usually a WHERE clause, but other SQL clauses can be used; for example, using an ORDER BY clause when exporting from a heap-organized table can make a later import into an index-organized table much faster.

If no schema.table_name is given, the query applies to all exported tables; a table-specific query overrides a job-wide one, and only one query can be in effect for a given table.

If NETWORK_LINK and QUERY are used together, objects referenced in the query must be qualified with the NETWORK_LINK value, otherwise they are resolved on the local database, for example: QUERY=(hr.employees:"WHERE last_name IN (SELECT last_name FROM hr.employees@dblink1)")

As before, the quotes on the command line may need OS-specific escaping; if that cannot be made to work, use a parameter file (a sketch follows the example).

Example

$ expdp scott/tiger DIRECTORY=MY_DIR  DUMPFILE=tab.dmp TABLES=stu,address  query=stu:\" where sno>1 \",address:\" where sno>10 \"
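The same filters expressed in a parameter file, which avoids the escaping (a sketch; the directory, dump file, and table names are the same placeholders as above):

$ cat query.par

DIRECTORY=MY_DIR

DUMPFILE=tab.dmp

TABLES=stu,address

QUERY=stu:"where sno > 1"

QUERY=address:"where sno > 10"

$ expdp scott/tiger parfile=query.par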

Restrictions

  1. When the QUERY parameter is specified for a table, Data Pump uses external tables to unload the target table. External tables uses a SQL CREATE TABLE AS SELECT statement. The value of the QUERY parameter is the WHERE clause in the SELECT portion of the CREATE TABLE statement.

If the query references another table that has columns with the same names as the table being exported, use the alias KU$ to refer to the table being unloaded:

  2. If the QUERY parameter includes references to another table with columns whose names match the table being unloaded, and if those columns are used in the query, then you will need to use a table alias to distinguish between columns in the table being unloaded and columns in the SELECT statement with the same name. The table alias used by Data Pump for the table being unloaded is KU$.

For example, suppose you want to export a subset of the sh.sales table based on the credit limit for a customer in the sh.customers table. In the following example, KU$ is used to qualify the cust_id field in the QUERY parameter for unloading sh.sales. As a result, Data Pump exports only rows for customers whose credit limit is greater than $10,000.

QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c

WHERE cust_credit_limit > 10000 AND ku$.cust_id = c.cust_id)"'

If, as in the following query, KU$ is not used for a table alias, then the result will be that all rows are unloaded:

QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c

   WHERE cust_credit_limit > 10000 AND cust_id = c.cust_id)"'

  2. SAMPLE=[[schema_name.]table_name:]sample_percent

Exports only a percentage of the rows; the allowed range is 0.000001 up to, but not including, 100. Export only.

If no table is specified, the sample percentage applies to all exported tables.

You can use this parameter with the Data Pump Import PCTSPACE transform, so that the size of storage allocations matches the sampled data subset.

Note: this parameter cannot be used together with NETWORK_LINK.

$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  1. LOGFILE=logname

LOGFILE specifies the log file name. A log is produced even if this parameter is omitted; the default file name is export.log, and an existing file is overwritten. With NOLOGFILE=YES no log file is written, but messages still go to standard output.

Also note that if you use ASM storage, write the log file to a regular disk directory, not into ASM.

Data Pump Export writes the log file using the database character set. If your client NLS_LANG environment setting sets up a different client character set from the database character set, then it is possible that table names may be different in the log file than they are when displayed on the client output screen.

  2. LOGTIME=[NONE (default) | STATUS | LOGFILE | ALL]

You can use the timestamps to figure out the elapsed time between different phases of a Data Pump operation. Such information can be helpful in diagnosing performance problems and estimating the timing of future similar operations.

NONE : No timestamps on status or log file messages (same as default)

STATUS : Timestamps on status messages only

LOGFILE : Timestamps on log file messages only

ALL : Timestamps on both status and log file messages

  3. METRICS=[YES | NO (default)]

Controls whether additional information is written to the log; with YES, the number of objects and the time taken are reported (see the sketch below).
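For example (a sketch; the directory and dump file names are placeholders):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_metrics.dmp METRICS=YES LOGTIME=ALL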

  4. STATUS=[integer]

If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then no additional information is displayed beyond information about the completion of each object type, table, or partition.

This status information is written only to your standard output device, not to the log file.

Example

$ expdp hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  1. ESTIMATE=[BLOCKS (default) | STATISTICS]

Estimates how much disk space (in bytes) each exported table will occupy, not how long the export will take, although the estimate can be combined with the percent-complete figure to gauge the remaining time.

BLOCKS - calculated from the number of blocks used in the source database multiplied by the block size.

STATISTICS - calculated from each table's statistics, so the tables to be exported should be analyzed beforehand for this estimate to be accurate.

Note: for compressed tables the BLOCKS estimate is inaccurate and STATISTICS should be used; the estimate is also inaccurate when the QUERY or REMAP_DATA option is used.

$ expdp hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=estimate_stat.dmp

  2. ESTIMATE_ONLY=[YES | NO]

Performs only the estimate without actually exporting anything; it cannot be combined with QUERY.

$ expdp hr ESTIMATE_ONLY=YES NOLOGFILE=YES SCHEMAS=HR

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  1. CLUSTER=[YES (default) | NO]

$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=NO PARALLEL=3

Determines which Oracle RAC instances can start worker processes; note that the master process still runs on the instance where the job was started.

Note: in a RAC environment the directory should be created on shared storage so that all instances can use it. If there is a special need, CLUSTER=NO restricts all worker processes to the instance where Data Pump was started.

If neither CLUSTER nor SERVICE_NAME is specified, Data Pump does not control which instances the worker processes run on in a RAC environment, which is why the directory must be on shared storage.

1) To force Data Pump Export to use only the instance where the job is started and to replicate pre-Oracle Database 11g release 2 (11.2) behavior, specify CLUSTER=NO.

2) To specify a specific, existing service and constrain worker processes to run only on instances defined for that service, use the SERVICE_NAME parameter with the CLUSTER=YES parameter.

3) The CLUSTER parameter can affect performance, because distributing the export job across Oracle RAC instances adds some overhead; for small exports prefer CLUSTER=NO, otherwise use CLUSTER=YES:

Use of the CLUSTER parameter may affect performance because there is some additional overhead in distributing the export job across Oracle RAC instances. For small jobs, it may be better to specify CLUSTER=NO to constrain the job to run on the instance where it is started. Jobs whose performance benefits the most from using the CLUSTER parameter are those involving large amounts of data.

  2. SERVICE_NAME=name

Used together with CLUSTER=YES to constrain worker processes to the instances of a given service (an example command follows the scenarios below).

The SERVICE_NAME parameter is ignored if CLUSTER=NO is also specified.

Suppose you have an Oracle RAC configuration containing instances A, B, C, and D. Also suppose that a service named my_service exists with a resource group consisting of instances A, B, and C only.

In such a scenario, the following would be true:

  1. If you start a Data Pump job on instance A and specify CLUSTER=YES and you do not specify the SERVICE_NAME parameter, then Data Pump creates workers on all instances: A, B, C, and D, depending on the degree of parallelism specified.
  2. If you start a Data Pump job on instance A and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, and C only.
  3. If you start a Data Pump job on instance D and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, C, and D. Even though instance D is not in my_service it is included because it is the instance on which the job was started.
  4. If you start a Data Pump job on instance A and specify CLUSTER=NO, then any SERVICE_NAME parameter you specify is ignored and all processes will start on instance A.
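A sketch of scenario 2 above as a command (my_service and the file names are placeholders; dpump_dir1 must point to shared storage, as noted under CLUSTER):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_svc%U.dmp PARALLEL=3 CLUSTER=YES SERVICE_NAME=my_service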

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

VERSION=[COMPATIBLE (default) | LATEST | version_string]

Only objects compatible with the release specified by VERSION are exported:

Only database objects and attributes that are compatible with the specified release will be exported. Database objects or attributes that are incompatible with the release specified for VERSION will not be exported.

When the target release is higher than the source, VERSION normally does not need to be specified; the exception is a FULL=Y export of an 11.2.0.3 (or later 11g) database destined for a 12c database, which must explicitly specify VERSION=12 (a sketch follows the quote below):

In an upgrade situation, when the target release of a Data Pump-based migration is higher than the source, the VERSION parameter typically does not have to be specified because all objects in the source database will be compatible with the higher target release. An exception is when an entire Oracle Database 11g (release 11.2.0.3 or higher) is exported in preparation for importing into Oracle Database 12c Release 1 (12.1.0.1) or later. In this case, explicitly specify VERSION=12 in conjunction with FULL=YES in order to include a complete set of Oracle internal component metadata.
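A sketch of that exceptional case, run on the 11.2.0.3 (or later 11g) source (directory and file names are placeholders):

$ expdp system FULL=YES VERSION=12 DIRECTORY=dpump_dir1 DUMPFILE=full_for12c%U.dmp LOGFILE=full_for12c.log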

To import into a lower release, VERSION must be explicitly specified at export time:

In a downgrade situation, when the target release of a Data Pump-based migration is lower than the source, the VERSION parameter should be explicitly specified to be the same version as the target. An exception is when the target release version is the same as the value of the COMPATIBLE initialization parameter on the source system; then VERSION does not need to be specified.

  1. COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level as specified on the COMPATIBLE initialization parameter. Database compatibility must be set to 9.2 or later.
  2. LATEST - The version of the metadata and resulting SQL DDL corresponds to the database release regardless of its compatibility level.
  3. version_string - A specific database release (for example, 11.2.0). In Oracle Database 11g, this value cannot be lower than 9.2.

Restrictions

  1. Exporting a table with archived LOBs to a database release earlier than 11.2 is not allowed.
  2. If the Data Pump Export VERSION parameter is specified along with the TRANSPORT_TABLESPACES parameter, then the value must be equal to or greater than the Oracle Database COMPATIBLE initialization parameter.
  3. If the Data Pump VERSION parameter is specified as any value earlier than 12.1, then the Data Pump dump file excludes any tables that contain VARCHAR2 or NVARCHAR2 columns longer than 4000 bytes and any RAW columns longer than 2000 bytes.
  4. Database privileges that are valid only in Oracle Database 12c Release 1 (12.1.0.2) and later (for example, the READ privilege on tables, views, materialized views, and synonyms) cannot be imported into Oracle Database 12c Release 1 (12.1.0.1) or earlier. If an attempt is made to do so, then Import reports it as an error and continues the import operation.
  5. If you specify a database release that is older than the current database release, then certain features and data types may be unavailable. For example, specifying VERSION=10.1 causes an error if data compression is also specified for the job because compression was not supported in Oracle Database 10g release 1 (10.1). Another example would be if a user-defined type or Oracle-supplied type in the source database is a later version than the type in the target database, then it will not be loaded because it does not match any version of the type in the target database.
  6. When operating across a network link, Data Pump requires that the source and target databases differ by no more than two versions. For example, if one database is Oracle Database 12c, then the other database must be 12c, 11g, or 10g. Note that Data Pump checks only the major version number (for example, 10g,11g, 12c), not specific release numbers (for example, 12.2, 12.1, 11.1, 11.2, 10.1, or 10.2).
  7. Importing Oracle Database 11g dump files that contain table statistics into Oracle Database 12c Release 1 (12.1) or later may result in an Oracle ORA-39346 error. This is because Oracle Database 11g dump files contain table statistics as metadata, whereas Oracle Database 12c Release 1 (12.1) and later expect table statistics to be presented as table data. The workaround is to ignore the error and after the import operation completes, regather table statistics.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NETWORK_LINK=source_database_link

With expdp, the database you are connected to pulls data from the remote source database over the database link and writes it to a DUMPFILE on the database expdp is connected to. With impdp, NETWORK_LINK means data from the source database is loaded directly into the connected database with no intermediate dump file.

Note: these are the only two usages; for example, impdp cannot push data over the network into a remote database.

$ expdp oracle/oracle TABLES=T1 DUMPFILE=1.dmp DIRECTORY=BAK_DIR NETWORK_LINK=remotebak

Here DIRECTORY and DUMPFILE are both on the database that expdp connects to. The following imports directly, without an intermediate file:

$ impdp oracle/oracle TABLES=T1 NETWORK_LINK=remotebak TABLE_EXISTS_ACTION=APPEND

This imports the remote T1 table into the local T1 table.

Restrictions

  1. The following types of database links are supported for use with Data Pump Export:

Public fixed user

Public connected user

Public shared user (only when used by link owner)

Private shared user (only when used by link owner)

Private fixed user (only when used by link owner)

  2. The following types of database links are not supported for use with Data Pump Export:

Private connected user

Current user

  3. When operating across a network link, Data Pump requires that the source and target databases differ by no more than two versions. For example, if one database is Oracle Database 12c, then the other database must be 12c, 11g, or 10g. Note that Data Pump checks only the major version number (for example, 10g, 11g, 12c), not specific release numbers (for example, 12.1, 12.2, 11.1, 11.2, 10.1 or 10.2).
  4. When transporting a database over the network using full transportable export, auditing cannot be enabled for tables stored in an administrative tablespace (such as SYSTEM and SYSAUX) if the audit trail information itself is stored in a user-defined tablespace.
  5. Metadata cannot be imported in parallel when the NETWORK_LINK parameter is also used.
  6. If an export operation is performed over an unencrypted network link, then all data is exported as clear text even if it is encrypted in the database.
  7. If the source database is read-only, then the user on the source database must have a locally managed temporary tablespace assigned as the default temporary tablespace. Otherwise, the job will fail.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PARALLEL=integer

Specifies the maximum number of processes of active execution operating on behalf of the export job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process and worker processes acting as query coordinators in parallel query operations do not count toward this total.

The value you specify for integer should be less than, or equal to, the number of files in the dump file set (or you should specify either the %U or %L substitution variables in the dump file specifications).
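For example, matching the number of dump files to the degree of parallelism with the %U substitution variable (a sketch; names are placeholders):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_par%U.dmp PARALLEL=4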

Using PARALLEL During An Export In An Oracle RAC Environment

In an Oracle Real Application Clusters (Oracle RAC) environment, if an export operation has PARALLEL=1, then all Data Pump processes reside on the instance where the job is started. Therefore, the directory object can point to local storage for that instance.

If the export operation has PARALLEL set to a value greater than 1, then Data Pump processes can reside on instances other than the one where the job was started. Therefore, the directory object must point to shared storage that is accessible by all instances of the Oracle RAC.

Restrictions

  1. Transportable tablespace metadata cannot be exported in parallel.
  2. Metadata cannot be exported in parallel when the NETWORK_LINK parameter is also used
  3. The following objects cannot be exported in parallel:

TRIGGER

VIEW

OBJECT_GRANT

SEQUENCE

CONSTRAINT

REF_CONSTRAINT

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE | CONVENTIONAL| INSERT_AS_SELECT]

Specifies the default data access method to use; it can be used with NETWORK_LINK exports, but not with transportable-tablespace exports.

If a table cannot be exported with the specified method, an error is reported for that table and the job continues with the next object.

AUTOMATIC:Data Pump determines the best way to unload data for each table. Oracle recommends that you use AUTOMATIC whenever possible because it allows Data Pump to automatically select the most efficient method.

DIRECT_PATH — Data Pump uses direct path unload for every table.

EXTERNAL_TABLE — Data Pump uses a SQL CREATE TABLE AS SELECT statement to create an external table using data that is stored in the dump file. The SELECT clause reads from the table to be unloaded.

INSERT_AS_SELECT — Data Pump executes a SQL INSERT AS SELECT statement to unload data from a remote database. This option is only available for network mode exports.

$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr ACCESS_METHOD=EXTERNAL_TABLE

With AUTOMATIC (or when no method is specified), the methods are tried in this order by default:

direct path -> external tables -> conventional path (impdp only) -> INSERT_AS_SELECT

The transportable and network_link methods are only used when enabled by their corresponding parameters.

For a NETWORK_LINK import, ACCESS_METHOD requires release 12.2 or later, and only AUTOMATIC, DIRECT_PATH, and INSERT_AS_SELECT can be used; transportable-tablespace imports do not support the ACCESS_METHOD option at all.

  1. Direct path

The dump file storage format is the internal stream format of the direct path API. This format is very similar to the format stored in Oracle database data files inside of tablespaces. Therefore, no client-side conversion to INSERT statement bind variables is performed.

The supported data access methods, direct path and external tables, are faster than conventional SQL. The direct path API provides the fastest single-stream performance. The external tables feature makes efficient use of the parallel queries and parallel DML capabilities of the Oracle database.

Cases where impdp cannot use direct path:

If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to load the data for that table:

  1. A domain index that is not a CONTEXT type index exists for a LOB column.
  2. A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
  3. A table is in a cluster.
  4. There is an active trigger on a preexisting table.
  5. Fine-grained access control is enabled in insert mode on a preexisting table.
  6. A table contains BFILE columns or columns of opaque types.
  7. A referential integrity constraint is present on a preexisting table.
  8. A table contains VARRAY columns with an embedded opaque type.
  9. The table has encrypted columns.
  10. The table into which data is being imported is a preexisting table and at least one of the following conditions exists:

There is an active trigger

The table is partitioned

Fine-grained access control is in insert mode

A referential integrity constraint exists

A unique index exists

  11. Supplemental logging is enabled and the table has at least one LOB column.
  12. The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
  13. A table contains a column (including a VARRAY column) with a TIMESTAMP WITH TIME ZONE data type and the version of the time zone data file is different between the export and import systems.

Cases where expdp cannot use direct path:

If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to unload the data:

  1. Fine-grained access control for SELECT is enabled.
  2. The table is a queue table.
  3. The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
  4. The table contains encrypted columns.
  5. The table contains a column of an evolved type that needs upgrading.
  6. The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
  7. Prior to the unload operation, the table was altered to contain a column that is NOT NULL and also has a default value specified.

  2. External tables mechanism

When direct path cannot be used, the external tables mechanism is used instead:

When data file copying is not selected and the data cannot be moved using direct path, the external tables mechanism is used.

The external tables mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data.

For very large tables and partitions, single worker processes can choose intrapartition parallelism through multiple parallel queries and parallel DML I/O server processes when the external tables method is used to access data.

External tables are used in the following situations:

  1. Loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities
  2. Loading tables with global or domain indexes defined on them, including partitioned object tables
  3. Loading tables with active triggers or clustered tables
  4. Loading and unloading tables with encrypted columns
  5. Loading tables with fine-grained access control enabled for inserts
  6. Loading a table not created by the import operation (the table exists before the import starts)

Note: this is not the kind of external table you create yourself; Data Pump uses the ORACLE_DATAPUMP access driver:

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement.

  3. CONVENTIONAL

impdp only: In situations where there are conflicting table attributes, Data Pump is not able to load data into a table using either direct path or external tables. In such cases, conventional path is used, which can affect performance.

CONVENTIONAL — Data Pump creates an external table over the data stored in the dump file and reads rows from the external table one at a time. Every time it reads a row Data Pump executes an insert statement to load that row into the target table. This method takes a long time to load data, but it is the only way to load data that cannot be loaded by direct path and external tables.

  4. INSERT_AS_SELECT

Moves data with an INSERT ... AS SELECT statement and applies only to NETWORK_LINK operations. By default direct path is preferred (before 12.2.0.1 the default was the INSERT ... SELECT statement); when direct path cannot be used, Data Pump falls back to INSERT AS SELECT into the target database.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CONTENT=[ALL (default) | DATA_ONLY | METADATA_ONLY]

If METADATA_ONLY is used on export or import, any index or table statistics are locked after the import (see the note on unlocking them below):

Be aware that if you specify CONTENT=METADATA_ONLY,then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
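If that happens, the statistics can be unlocked after the import with DBMS_STATS (a sketch; the schema name is a placeholder):

SQL> EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('HR');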

REUSE_DUMPFILES=[YES | NO (default)]

By default an error is raised if the dump file already exists; with YES the existing file is simply overwritten.

FILESIZE=integer[B (default) | KB | MB | GB | TB]

Specifies the maximum size of each dump file; if the export needs more space than the dump file set allows, it stops with an error.

$ expdp system/system directory=MY_DIR dumpfile=3.dmp filesize=500 tables=scott

PARFILE=[directory_path]file_name

Note that the parameter file does not use the DIRECTORY object; it is read by the expdp client program itself, and by default is looked for in the current directory.

Each parameter goes on its own line in the parameter file; to continue a value on the next line, end the first line with \.

$ vi exp1.txt

DIRECTORY=MY_DIR

DUMPFILE=tab.dmp

TABLES=dept,emp

$ expdp scott/tiger parfile=exp1.txt

JOB_NAME=jobname_string

The default job name is SYS_EXPORT_<mode>_NN, where NN is a two-digit number starting at 01.

The job name is used as the name of the master table, which controls the export job.

KEEP_MASTER=[YES | NO (default)]

Controls whether the master table is retained after the job completes normally.

Note: if the job does not end normally, the master table is retained automatically (a sketch follows).
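A sketch of retaining the master table and then checking it (the job and file names are assumptions; the master table is created under the connecting user with the job name):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_keep.dmp JOB_NAME=hr_exp_job KEEP_MASTER=YES

SQL> SELECT table_name FROM user_tables WHERE table_name = 'HR_EXP_JOB';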

DATA_OPTIONS= [XML_CLOBS | GROUP_PARTITION_TABLE_DATA | VERIFY_STREAM_FORMAT]

Import also has a DATA_OPTIONS parameter, but its available options are completely different.

The export DATA_OPTIONS parameter has no default value, and it requires the job version to be set to 11.0.0 or later.

expdp hr TABLES=hr.xdb_tab1 DIRECTORY=dpump_dir1 DUMPFILE=hr_xml.dmp VERSION=11.2 DATA_OPTIONS=XML_CLOBS

  1. XML_CLOBS — specifies that XMLType columns are to be exported in uncompressed CLOB format regardless of the XMLType storage format that was defined for them.

XMLType stored as CLOB is deprecated as of Oracle Database 12c Release 1 (12.1). XMLType tables and columns are now stored as binary XML.

If a table has XMLType columns stored only in CLOB format, then it is not necessary to specify the XML_CLOBS option because Data Pump automatically exports them in CLOB format. If a table has XMLType columns stored as any combination of object-relational (schema-based), binary, or CLOB formats, then Data Pump exports them in compressed format, by default. This is the preferred method. However, if you need to export the data in uncompressed CLOB format, you can use the XML_CLOBS option to override the default.

Using the XML_CLOBS option requires that the same XML schema be used at both export and import time.

  2. GROUP_PARTITION_TABLE_DATA — tells Data Pump to unload all table data in one operation rather than unload each table partition as a separate operation. As a result, the definition of the table will not matter at import time because Import will see one partition of data that will be loaded into the entire table.

  3. VERIFY_STREAM_FORMAT — validates the format of a data stream before it is written to the Data Pump dump file. The verification checks for a valid format for the stream after it is generated but before it is written to disk. This assures that there are no errors when the dump file is created, which in turn helps to assure that there will not be errors when the stream is read at import time.

VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], ...

This option unloads view data in unencrypted form and creates it as an unencrypted table:

The VIEWS_AS_TABLES parameter unloads view data in unencrypted format and creates an unencrypted table. Data Pump also exports objects dependent on the view, such as grants and constraints. Dependent objects that do not apply to tables (for example, grants of the UNDER object privilege) are not exported.

table_name: The name of a table to serve as the source of the metadata for the exported view. By default Data Pump automatically creates a temporary "template table" with the same columns and data types as the view, but no rows. If the database is read-only, then this default creation of a template table will fail. In such a case, you can specify a table name. The table must be in the same schema as the view. It must be a non-partitioned relational table with heap organization. It cannot be a nested table.

Template tables are automatically dropped after the export operation is completed. While they exist, you can perform the following query to view their names (which all begin with KU$VAT):

If the export job contains multiple views with explicitly specified template tables, the template tables must all be different.

SQL> SELECT * FROM user_tab_comments WHERE table_name LIKE 'KU$VAT%';

TABLE_NAME     TABLE_TYPE     COMMENTS

-----------------------------------------------------

KU$VAT_63629    TABLE          Data Pump metadata template table for view SCOTT.EMPV

Example

$ expdp scott/tiger views_as_tables=view1 directory=data_pump_dir dumpfile=scott1.dmp

Restrictions

  1. The VIEWS_AS_TABLES parameter cannot be used with the TRANSPORTABLE=ALWAYS parameter.
  2. Tables created using the VIEWS_AS_TABLES parameter do not contain any hidden or invisible columns that were part of the specified view.
  3. The VIEWS_AS_TABLES parameter does not support tables that have columns with a data type of LONG.

Import Parameters

Parameters shared with expdp include help, full, schemas, tables, attach, cluster, service_name, directory, dumpfile, estimate (there is no estimate_only), exclude, include, keep_master, logfile, logtime, metrics, nologfile, status, and parfile, as well as:

job_name (the default job name is SYS_<IMPORT or SQLFILE>_<mode>_NN),

flashback_scn and flashback_time (used exactly as in expdp, but only with network_link),

content (note that CONTENT=ALL and CONTENT=DATA_ONLY conflict with SQLFILE),

query (with QUERY, direct path is not used; the external tables method is used instead. When the QUERY parameter is specified for a table, Data Pump uses external tables to load the target table. External tables uses a SQL INSERT statement with a SELECT clause. The value of the QUERY parameter is included in the WHERE clause of the SELECT portion of the INSERT statement.)

ACCESS_METHOD=[AUTOMATIC (default) | DIRECT_PATH | EXTERNAL_TABLE | CONVENTIONAL_PATH | INSERT_AS_SELECT]

Compared with expdp this adds CONVENTIONAL_PATH; see the expdp section for details.

DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS | ENABLE_NETWORK_COMPRESSION | REJECT_ROWS_WITH_REPL_CHAR | TRUST_EXISTING_TABLE_PARTITIONS | VALIDATE_TABLE_DATA]

  1. DISABLE_APPEND_HINT

By default the load uses the APPEND hint on its inserts; if the target table already exists, is small, and is being accessed concurrently by applications, you can disable the APPEND hint for the load (see the sketch below).
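A sketch of that case (table, directory, and dump file names are placeholders):

$ impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TABLE_EXISTS_ACTION=APPEND DATA_OPTIONS=DISABLE_APPEND_HINT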

  2. SKIP_CONSTRAINT_ERRORS

Constraint violations are logged and the load continues, instead of the violating object being rolled back. Note that if the target table has a unique index or unique constraint and SKIP_CONSTRAINT_ERRORS is used, the rows are not inserted in APPEND mode, so the import can be much slower.

If the external tables access method is not used, SKIP_CONSTRAINT_ERRORS has no effect even when specified.

The SKIP_CONSTRAINT_ERRORS option specifies that you want the import operation to proceed even if non-deferred constraint violations are encountered. It has no effect on the load if deferred constraint violations are encountered. Deferred constraint violations always cause the entire load to be rolled back.

It logs any rows that cause non-deferred constraint violations, but does not stop the load for the data object experiencing the violation. If SKIP_CONSTRAINT_ERRORS is not set, then the default behavior is to roll back the entire load of the data object on which non-deferred constraint violations are encountered.

  3. ENABLE_NETWORK_COMPRESSION

Applies when NETWORK_LINK is used and the access method is DIRECT_PATH: data is compressed on the remote node, sent over the network, and decompressed locally, which helps when the network is slow. The option is ignored if the remote database is older than release 12.2 (a sketch follows).

If ACCESS_METHOD=AUTOMATIC and Data Pump decides to use DIRECT_PATH for a network import, then ENABLE_NETWORK_COMPRESSION would also apply.
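A sketch over a database link (source_db_link is an assumed link name; both databases are assumed to be release 12.2 or later):

$ impdp hr TABLES=hr.employees NETWORK_LINK=source_db_link ACCESS_METHOD=DIRECT_PATH DATA_OPTIONS=ENABLE_NETWORK_COMPRESSION TABLE_EXISTS_ACTION=APPEND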

  4. REJECT_ROWS_WITH_REPL_CHAR

When the import database character set is not a superset of the character set of the exported data, characters that cannot be converted are by default replaced with the default replacement character; this option rejects such rows instead.

specifies that you want the import operation to reject any rows that experience data loss because the default replacement character was used during character set conversion.

If REJECT_ROWS_WITH_REPL_CHAR is not set, then the default behavior is to load the converted rows with replacement characters.

  5. TRUST_EXISTING_TABLE_PARTITIONS

Loads partition data in parallel directly into an existing table; use it when you have already created the table with exactly the same partitions and subpartitions as the source, so data can be inserted concurrently by (sub)partition name. Note that the partition attributes and partition names must match the source database.

tells Data Pump to load partition data in parallel into existing tables. You should use this option when you are using Data Pump to create the table from the definition in the export database before the table data import is started.

If you use this option and if other attributes of the database are the same (for example, character set), then the data from the export database goes to the same partitions in the import database.

  6. VALIDATE_TABLE_DATA

directs Data Pump to validate the number and date data types in table data columns. An ORA-39376 error is written to the .log file if invalid data is encountered. The error text includes the column name. The default is to do no validation. Use this option if the source of the Data Pump dump file is not trusted.

PARALLEL=integer

If parallelism is used and the dump file set consists of only one file, multiple processes can read from that file concurrently, but performance may be limited by I/O contention:

If the source of the import is a dump file set consisting of files, then multiple processes can read from the same file, but performance may be limited by I/O contention.

Parallelism when NETWORK_LINK is used:

To understand the effect of the PARALLEL parameter during a network import mode, it is important to understand the concept of "table_data objects" as defined by Data Pump. When Data Pump moves data, it considers the following items to be individual "table_data objects":

a complete table (one that is not partitioned or subpartitioned)

partitions, if the table is partitioned but not subpartitioned

subpartitions, if the table is subpartitioned

For example:

A nonpartitioned table, scott.non_part_table, has 1 table_data object:

scott.non_part_table

A partitioned table, scott.part_table (having partition p1 and partition p2), has 2 table_data objects:

scott.part_table:p1

scott.part_table:p2

A subpartitioned table, scott.sub_part_table (having partition p1 and p2, and subpartitions p1s1, p1s2, p2s1, and p2s2) has 4 table_data objects:

scott.sub_part_table:p1s1

scott.sub_part_table:p1s2

scott.sub_part_table:p2s1

scott.sub_part_table:p2s2

During a network mode import, each table_data object is assigned its own worker process, up to the value specified for the PARALLEL parameter.

No parallel query (PQ) slaves are assigned because network mode import does not use parallel query (PQ) slaves. Multiple table_data objects can be unloaded at the same time, but each table_data object is unloaded using a single process.

Parallelism in RAC: as with expdp, if PARALLEL=1 the worker processes run on the same node as the master process and the dump files can be on local storage; if PARALLEL is greater than 1, the dump files must be on shared storage.

Transportable tablespace metadata cannot be imported in parallel.

Metadata cannot be imported in parallel when the NETWORK_LINK parameter is also used

The following objects cannot be imported in parallel:

TRIGGER

VIEW

OBJECT_GRANT

SEQUENCE

CONSTRAINT

REF_CONSTRAINT

NETWORK_LINK=source_database_link

If the TRANSPORTABLE parameter was used for the export, the data files must be copied from the source system to the target system before the import.

If an import operation is performed over an unencrypted network link, then all data is imported as clear text even if it is encrypted in the database.

The Import NETWORK_LINK parameter is not supported for tables containing SecureFiles that have ContentType set or that are currently stored outside of the SecureFiles segment through Oracle Database File System Links.

Network imports do not support the use of evolved types.

PARTITION_OPTIONS=[NONE | DEPARTITION | MERGE]

The default is NONE, except that when the export/import uses table mode with TRANSPORTABLE=ALWAYS, the default is DEPARTITION.

NONE creates tables exactly as they were exported; DEPARTITION turns each partition or subpartition into its own new non-partitioned table; MERGE combines all partitions into a single non-partitioned table (see the sketch below).
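For instance, merging an exported partitioned table into a single non-partitioned table (a sketch; names are placeholders):

$ impdp system DIRECTORY=dpump_dir1 DUMPFILE=sh_sales.dmp TABLES=sh.sales PARTITION_OPTIONS=MERGE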

Parallelism when importing partitioned tables:

Parallel processing during import of partitioned tables is subject to the following:

If a partitioned table is imported into an existing partitioned table, then Data Pump only processes one partition or subpartition at a time, regardless of any value that might be specified with the PARALLEL parameter.

If the table into which you are importing does not already exist and Data Pump has to create it, then the import runs in parallel up to the parallelism specified on the PARALLEL parameter when the import is started.

Restrictions

  1. If the export operation that created the dump file was performed with the transportable method and if a partition or subpartition was specified, then the import operation must use the departition option.
  2. If the export operation that created the dump file was performed with the transportable method, then the import operation cannot use PARTITION_OPTIONS=MERGE.
  3. If there are any grants on objects being departitioned, then an error message is generated and the objects are not loaded.

REUSE_DATAFILES=[YES | NO (default)]

By default, if a data file named in a CREATE TABLESPACE statement already exists, an error is reported; with YES the existing file is reinitialized and reused, which can cause data loss.

If the default (n) is used and the data files specified in CREATE TABLESPACE statements already exist, then an error message from the failing CREATE TABLESPACE statement is issued, but the import job continues.

If this parameter is specified as y, then the existing data files are reinitialized. Specifying REUSE_DATAFILES=YES may result in a loss of data.

SKIP_UNUSABLE_INDEXES=[YES | NO]

Controls how indexes in the UNUSABLE state are handled: with YES, the table data is loaded and the unusable indexes are skipped; with NO, a table or partition with an unusable index is not loaded at all. This option mainly matters when importing into existing tables.

If SKIP_UNUSABLE_INDEXES is set to NO, and a table or partition with an index in the Unusable state is encountered, then that table or partition is not loaded.

If the SKIP_UNUSABLE_INDEXES parameter is not specified, then the setting of the Oracle Database configuration parameter, SKIP_UNUSABLE_INDEXES (whose default value is y), will be used to determine how to handle unusable indexes.

If indexes used to enforce constraints are marked unusable, then the data is not imported into that table.

This parameter is useful only when importing data into an existing table. It has no practical effect when a table is created as part of an import because in that case, the table and indexes are newly created and will not be marked unusable.

SQLFILE=[directory_object:]file_name

$ impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql

Writes the DDL that the import would have executed into the SQL file (there are no INSERT statements); no data is actually loaded into the database. If the file already exists it is overwritten. If ASM is used, the SQLFILE must not be placed in ASM.

Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written. The SQL is not actually executed, and the target system remains unchanged.

Passwords are not included in the SQL file; edit the script before running it:

Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, then it will be replaced by a comment with only the schema name shown. In the following example, the dashes (--) indicate that a comment follows, and the hr schema name is shown, but not the password.

-- CONNECT hr

Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema.

Data Pump places any ALTER SESSION statements at the top of the SQL file created by Data Pump import. So if the import operation has different connection statements, you must manually copy each of the ALTER SESSION statements and paste them after the appropriate CONNECT statements.

For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.

TABLE_EXISTS_ACTION=[SKIP| APPEND | TRUNCATE | REPLACE]

The default is SKIP, but with CONTENT=DATA_ONLY the default is APPEND.

SKIP: skip an existing table and continue with the next one; not valid with CONTENT=DATA_ONLY.

APPEND: load new rows, leaving existing data untouched.

TRUNCATE: truncate the table and then load; not valid for clustered tables.

REPLACE: drop the table, re-create it, and then load; not valid with CONTENT=DATA_ONLY.

Note: before using REPLACE or TRUNCATE, make sure the table is not the target of any referential constraints.

SKIP, APPEND, and TRUNCATE leave existing indexes, grants, triggers, and constraints unchanged.

REPLACE drops the table and then imports the table and its dependent objects from the source database (a combined sketch follows the notes below).

If the existing table has active constraints and triggers, then it is loaded using the external tables access method. If any row violates an active constraint, then the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.

If you have data that must be loaded, but may cause constraint violations, then consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints.

When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may want to compress your data after the load.

When Data Pump detects that the source and target tables do not match (different numbers of columns, or a target column name that is not present in the source), it compares column names between the two tables; if at least one column is common, the data for the common columns is imported (assuming the data types are compatible):

Note: When Data Pump detects that the source table and target table do not match (the two tables do not have the same number of columns or the target table has a column name that is not present in the source table), it compares column names between the two tables. If the tables have at least one column in common, then the data for the common columns is imported into the table (assuming the data types are compatible). The following restrictions apply:

  1. This behavior is not supported for network imports.
  2. The following types of columns cannot be dropped: object columns, object attributes, nested table columns, and ref columns based on a primary key.
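A sketch combining these options for a data-only reload of an existing table (names are placeholders; remember that TRUNCATE is not valid for clustered tables):

$ impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE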

TABLESPACES=tablespace_name [, ...]

The dump file can come from a full, schema, tablespace, or table-mode export, or NETWORK_LINK can be used instead.

Tablespaces are created automatically only in the following cases; otherwise they must be created manually before the import:

The import is being done in FULL or TRANSPORT_TABLESPACES mode

The import is being done in table mode with TRANSPORTABLE=ALWAYS

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TRANSFORM

Enables you to alter object creation DDL for objects being imported.

TRANSFORM = transform_name:value[:object_type]

The transform_name specifies the name of the transform. Specifying an object_type is optional. If supplied, it designates the object type to which the transform will be applied. If no object type is specified, then the transform applies to all valid object types.

Example

Below is the CREATE TABLE statement for hr.employees in the source database:

CREATE TABLE "HR"."EMPLOYEES"

   ( "EMPLOYEE_ID" NUMBER(6,0),

     "FIRST_NAME" VARCHAR2(20),

     "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE,

     "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE,

     "PHONE_NUMBER" VARCHAR2(20),

     "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE,

     "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE,

     "SALARY" NUMBER(8,2),

     "COMMISSION_PCT" NUMBER(2,2),

     "MANAGER_ID" NUMBER(6,0),

     "DEPARTMENT_ID" NUMBER(4,0)

   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING

  STORAGE(INITIAL 10240 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 121

  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)

  TABLESPACE "SYSTEM" ;

To drop both the STORAGE clause and the TABLESPACE clause, use:

$ impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TRANSFORM=SEGMENT_ATTRIBUTES:N:table

To remove the STORAGE clause but keep the TABLESPACE clause:

$ impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp

  TRANSFORM=STORAGE:N:table

The available transforms are as follows:

  1. DISABLE_ARCHIVE_LOGGING:[Y | N]

Controls whether logging is disabled (NOLOGGING) for tables and indexes while they are imported (see the sketch below):

This transform is valid for the following object types: INDEX and TABLE.

If set to Y, then the logging attributes for the specified object types (TABLE and/or INDEX) are disabled before the data is imported. If set to N(the default), then archive logging is not disabled during import. After the data has been loaded, the logging attributes for the objects are restored to their original settings. If no object type is specified, then the DISABLE_ARCHIVE_LOGGING behavior is applied to both TABLE and INDEX object types. This transform works for both file mode imports and network mode imports. It does not apply to transportable tablespace imports. If the database is in FORCE LOGGING mode, then the DISABLE_ARCHIVE_LOGGING option will not disable logging when indexes and tables are created.
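A sketch applying the transform to both tables and indexes (names are placeholders; it has no effect if the database is in FORCE LOGGING mode):

$ impdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y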

  2. INMEMORY:[Y | N]

This transform is valid for the following object types: TABLE and TABLESPACE.

The INMEMORY transform is related to the In-Memory Column Store (IM column store). The IM column store is an optional portion of the system global area (SGA) that stores copies of tables, table partitions, and other database objects. In the IM column store, data is populated by column rather than row as it is in other parts of the SGA, and data is optimized for rapid scans. The IM column store does not replace the buffer cache, but acts as a supplement so that both memory areas can store the same data in different formats. The IM column store is included with the Oracle Database In-Memory option.

If Y (the default value) is specified on import, then Data Pump keeps the IM column store clause for all objects that have one. When those objects are recreated at import time, Data Pump generates the IM column store clause that matches the setting for those objects at export time.

If N is specified on import, then Data Pump drops the IM column store clause from all objects that have one. If there is no IM column store clause for an object that is stored in a tablespace, then the object inherits the IM column store clause from the tablespace. So if you are migrating a database and want the new database to use IM column store features, you could pre-create the tablespaces with the appropriate IM column store clause and then use TRANSFORM=INMEMORY:N on the import command. The object would then inherit the IM column store clause from the new pre-created tablespace.

If you do not use the INMEMORY transform, then you must individually alter every object to add the appropriate IM column store clause.

The INMEMORY transform is available only in Oracle Database 12c Release 1 (12.1.0.2) or later.

  3. INMEMORY_CLAUSE:"string with a valid in-memory parameter"

This transform is valid for the following object types: TABLE and TABLESPACE.

The INMEMORY_CLAUSE transform is related to the In-Memory Column Store (IM column store). The IM column store is an optional portion of the system global area (SGA) that stores copies of tables, table partitions, and other database objects. In the IM column store, data is populated by column rather than row as it is in other parts of the SGA, and data is optimized for rapid scans. The IM column store does not replace the buffer cache, but acts as a supplement so that both memory areas can store the same data in different formats. The IM column store is included with the Oracle Database In-Memory option.

When you specify this transform, Data Pump uses the contents of the string as the INMEMORY_CLAUSE for all objects being imported that have an IM column store clause in their DDL. This transform is useful when you want to override the IM column store clause for an object in the dump file.

The string that you supply must be enclosed in double quotation marks. If you are entering the command on the command line, be aware that some operating systems may strip out the quotation marks during parsing of the command, which will cause an error. You can avoid this by using backslash escape characters. For example:

transform=inmemory_clause:\"INMEMORY MEMCOMPRESS FOR DML PRIORITY CRITICAL\"

Alternatively you can put parameters in a parameter file, and the quotation marks will be maintained during processing.

  4. LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]

This transform is valid for the object type TABLE.

LOB segments are created with the specified storage, either SECUREFILE or BASICFILE. If the value is NO_CHANGE (the default), the LOB segments are created with the same storage they had in the source database. If the value is DEFAULT, then the keyword (SECUREFILE or BASICFILE) is omitted and the LOB segment is created with the default storage.

Specifying this transform changes LOB storage for all tables in the job, including tables that provide storage for materialized views.

The LOB_STORAGE transform is not valid in transportable import jobs.

  5. OID:[Y | N]

This transform is valid for the following object types: INC_TYPE, TABLE, and TYPE

If Y (the default value) is specified on import, then the exported OIDs are assigned to new object tables and types. Data Pump also performs OID checking when looking for an existing matching type on the target database.

If N is specified on import, then:

The assignment of the exported OID during the creation of new object tables and types is inhibited. Instead, a new OID is assigned. This can be useful for cloning schemas, but does not affect referenced objects.

Prior to loading data for a table associated with a type, Data Pump skips normal type OID checking when looking for an existing matching type on the target database. Other checks using a type's hash code, version number, and type name are still performed.

  6. PCTSPACE:some_number_greater_than_zero

This transform is valid for the following object types: CLUSTER, CONSTRAINT, INDEX, ROLLBACK_SEGMENT, TABLE, and TABLESPACE.

The value supplied for this transform must be a number greater than zero. It represents the percentage multiplier used to alter extent allocations and the size of data files.

Note that you can use the PCTSPACE transform with the Data Pump Export SAMPLE parameter so that the size of storage allocations matches the sampled data subset. (See "SAMPLE".)
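A sketch pairing the two, exporting a 30% sample and shrinking storage allocations to match (names and the percentage are placeholders):

$ expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_sample.dmp SAMPLE=30

$ impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_sample.dmp TRANSFORM=PCTSPACE:30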

  7. SEGMENT_ATTRIBUTES:[Y | N]

This transform is valid for the following object types: CLUSTER, CONSTRAINT, INDEX, ROLLBACK_SEGMENT, TABLE, and TABLESPACE.

If the value is specified as Y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is Y.

  8. SEGMENT_CREATION:[Y | N]

This transform is valid for the object type TABLE.

If set to Y (the default), then this transform causes the SQL SEGMENT CREATION clause to be added to the CREATE TABLE statement. That is, the CREATE TABLE statement will explicitly say either SEGMENT CREATION DEFERRED or SEGMENT CREATION IMMEDIATE. If the value is N, then the SEGMENT CREATION clause is omitted from the CREATE TABLE statement. Set this parameter to N to use the default segment creation attributes for the table(s) being loaded. (This functionality is available starting with Oracle Database 11g release 2 (11.2.0.2).)

  9. STORAGE:[Y | N]

This transform is valid for the following object types: CLUSTER, CONSTRAINT, INDEX, ROLLBACK_SEGMENT, and TABLE.

If the value is specified as Y, then the storage clauses are included, with appropriate DDL. The default is Y. This parameter is ignored if SEGMENT_ATTRIBUTES=N.

  10. TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]

This transform is valid for the object type TABLE.

If NONE is specified, then the table compression clause is omitted (and the table gets the default compression for the tablespace). Otherwise the value is a valid table compression clause (for example, NOCOMPRESS, COMPRESS BASIC, and so on). Tables are created with the specified compression. See Oracle Database SQL Language Reference for information about valid table compression syntax.

If the table compression clause is more than one word, then it must be contained in single or double quotation marks. Additionally, depending on your operating system requirements, you may need to enclose the clause in escape characters (such as the backslash character). For example:

TRANSFORM=TABLE_COMPRESSION_CLAUSE:\"COLUMN STORE COMPRESS FOR QUERY HIGH\"

Specifying this transform changes the type of compression for all tables in the job, including tables that provide storage for materialized views.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

VERSION=[COMPATIBLE (default) | LATEST | version_string]

Can be used with a dump file or with NETWORK_LINK; the default is the database's COMPATIBLE setting, so there is rarely a need to specify it on impdp.

COMPATIBLE - the value of the target database's COMPATIBLE parameter.

LATEST - the latest release; if the target's COMPATIBLE setting is lower than this, the option makes no difference, so in practice it behaves the same as COMPATIBLE.

version_string - A specific database release (for example, 11.2.0).

VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], ...

Used with the NETWORK_LINK option; behaves the same as VIEWS_AS_TABLES in expdp.

VIEWS_AS_TABLES=[schema_name.]view_name,...

Used with a dump file; the syntax differs slightly but the usage is the same.
