EXPDP / IMPDP Knowledge Summary

Data Pump Export

 

 

ATTACH

Default: job currently in the user's schema, if there is only one

Purpose(目的)

Attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the Export prompt.

This option attaches the client session to an existing export job.

Syntax and Description(语法和描述)
ATTACH [=[schema_name.]job_name]
 

The schema_name is optional. To specify a schema other than your own, you must have the EXP_FULL_DATABASE role.

The job_name is optional if only one export job is associated with your schema and the job is active. To attach to a stopped job, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.
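
For instance, a minimal sketch of listing the jobs you could attach to (the STATE column distinguishes running jobs from stopped ones):

SQL> SELECT job_name, state FROM user_datapump_jobs;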

When you are attached to the job, Export displays a description of the job and then displays the Export prompt.

schema_name specifies the schema name, and job_name specifies the name of the export job. Note that when the ATTACH option is used, no parameters other than the connection string and ATTACH itself may be specified on the command line.

Restrictions(限制)

When you specify the ATTACH parameter, you cannot specify any other parameters except for the connection string (user/password).

You cannot attach to a job in another schema unless it is already running.

If the dump file set or master table for the job have been deleted, the attach operation will fail.

Altering the master table in any way will lead to unpredictable results.

Example(示例)

The following is an example of using the ATTACH parameter. It assumes that the job, hr.export_job, already exists.

> expdp hr/hr ATTACH=hr.export_job

 

 

COMPRESSION

Default: METADATA_ONLY

Purpose(目的)

Specifies whether to compress metadata before writing to the dump file set.

Syntax and Description(语法和描述)
COMPRESSION=(METADATA_ONLY | NONE)
 
  • METADATA_ONLY results in all metadata being written to the dump file in compressed format. This is the default.
  • NONE disables compression for the entire unload.
Example(示例)

The following is an example of using the COMPRESSION parameter:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
 

This command will execute a schema-mode export that will compress all metadata before writing it out to the dump file, hr_comp.dmp. It defaults to a schema-mode export because no export mode is specified.

CONTENT

Default: ALL

Purpose(目的)

Enables you to filter what Export unloads: data only, metadata only, or both.

This option specifies the content to export. The default is ALL.

Syntax and Description(语法和描述)
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
 
  • ALL unloads both data and metadata. This is the default.
  • DATA_ONLY unloads only table row data; no database object definitions are unloaded.
  • METADATA_ONLY unloads only database object definitions; no table row data is unloaded.

When CONTENT=ALL, object definitions and all of their data are exported.

With DATA_ONLY, only object data is exported.

With METADATA_ONLY, only object definitions are exported.

Restrictions(限制)

The CONTENT=METADATA_ONLY parameter cannot be used in conjunction with the parameter TRANSPORT_TABLESPACES (transportable-tablespace-mode).

Example(示例)

The following is an example of using the CONTENT parameter:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY
This operation exports only the metadata of the HR schema.

This command will execute a schema-mode export that will unload only the metadata associated with the hr schema. It defaults to a schema-mode export of the hr schema because no export mode is specified.

DIRECTORY

Default: DATA_PUMP_DIR

Purpose(目的)

Specifies the default location to which Export can write the dump file set and the log file.  

Specifies the directory in which the dump files and the log file are written: DIRECTORY=directory_object.

 

Syntax and Description(语法和描述)

DIRECTORY=directory_object
 

The directory_object is the name of a database directory object (not the name of an actual directory). Upon installation, privileged users have access to a default directory object named DATA_PUMP_DIR. Users with access to DATA_PUMP_DIR need not use the DIRECTORY parameter at all.

A directory object specified on the DUMPFILE or LOGFILE parameter overrides any directory object that you specify for the DIRECTORY parameter.

directory_object specifies the name of a directory object. Note that a directory object is a database object created with the CREATE DIRECTORY statement, not an OS directory path.
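
As a sketch of that setup step (the OS path /u01/app/dpump is a hypothetical example), a privileged user could create such a directory object and grant the hr user access to it:

SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;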

Example(示例) 1

The following is an example of using the DIRECTORY parameter:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY
 

The dump file, employees.dmp, will be written to the path that is associated with the directory object dpump_dir1.

 

DUMPFILE

Default: expdat.dmp

Purpose(目的)

Specifies the names, and optionally, the directory objects of dump files for an export job. 

Specifies the name of the dump file; the default name is expdat.dmp.

directory_object specifies a directory object name and file_name specifies the dump file name. Note that if directory_object is omitted, the export utility uses the directory object specified by the DIRECTORY parameter.

Syntax and Description(语法和描述)
DUMPFILE=[directory_object:]file_name [, ...]
 

The directory_object is optional if one has already been established by the DIRECTORY parameter. If you supply a value here, it must be a directory object that already exists and that you have access to. A database directory object that is specified as part of the DUMPFILE parameter overrides a value specified by the DIRECTORY parameter or by the default directory object.

You can supply multiple file_name specifications as a comma-delimited list or in separate DUMPFILE parameter specifications. If no extension is given for the filename, then Export uses the default file extension of .dmp. The filenames can contain a substitution variable (%U), which implies that multiple files may be generated. The substitution variable is expanded in the resulting filenames into a 2-digit, fixed-width, incrementing integer starting at 01 and ending at 99. If a file specification contains two substitution variables, both are incremented at the same time. For example, exp%Uaa%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth.

If the FILESIZE parameter is specified, each dump file will have a maximum of that size in bytes and be nonextensible. If more space is required for the dump file set and a template with a substitution variable (%U) was supplied, a new dump file is automatically created of the size specified by FILESIZE, if there is room on the device.

As each file specification or file template containing a substitution variable is defined, it is instantiated into one fully qualified filename and Export attempts to create it. The file specifications are processed in the order in which they are specified. If the job needs extra files because the maximum file size is reached, or to keep parallel workers active, then additional files are created if file templates with substitution variables were specified.

Although it is possible to specify multiple files using the DUMPFILE parameter, the export job may only require a subset of those files to hold the exported data. The dump file set displayed at the end of the export job shows exactly which files were used. It is this list of files that is required in order to perform an import operation using this dump file set.

Restrictions(限制)

If there are preexisting files that match the resulting filenames, an error is generated. The existing dump files will not be overwritten.

Example(示例)

The following is an example of using the DUMPFILE parameter:

> expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,
 exp2%U.dmp PARALLEL=3
 

The dump file, exp1.dmp, will be written to the path associated with the directory object dpump_dir2 because dpump_dir2 was specified as part of the dump file name, and therefore overrides the directory object specified with the DIRECTORY parameter. Because all three parallel processes will be given work to perform during this job, the exp201.dmp and exp202.dmp dump files will be created and they will be written to the path associated with the directory object, dpump_dir1, that was specified with the DIRECTORY parameter.

 

ENCRYPTION_PASSWORD

Default: none

Purpose(目的)

Specifies a key for encrypting encrypted column data in the export dumpfile.

Syntax and Description(语法和描述)
ENCRYPTION_PASSWORD = password
 

The password value that is supplied specifies a key for re-encrypting encrypted table columns so that they are not written as clear text in the dump file set. If the export operation involves encrypted table columns, but an encryption password is not supplied, then the encrypted columns will be written to the dump file set as clear text and a warning will be issued.

To use the ENCRYPTION_PASSWORD parameter, you must have Transparent Data Encryption set up. See Oracle Database Advanced Security Administrator's Guide for more information about Transparent Data Encryption.

 

 

Restrictions(限制)

The ENCRYPTION_PASSWORD parameter applies only to columns that already have encrypted data. Data Pump neither provides nor supports encryption of entire dump files.

To use the ENCRYPTION_PASSWORD parameter, you must have Transparent Data Encryption set up. See Oracle Database Advanced Security Administrator's Guide for more information about Transparent Data Encryption.

The ENCRYPTION_PASSWORD parameter is only used to export encrypted columns used by Transparent Data Encryption functionality.

For network exports, the ENCRYPTION_PASSWORD parameter is not supported with user-defined external tables that have encrypted columns. The table will be skipped and an error message will be displayed, but the job will continue.

Encryption attributes for all columns must match between the exported table definition and the target table. For example, suppose you have a table, EMP, and one of its columns is named EMPNO. Both of the following situations would result in an error because the encryption attribute for the EMPNO column in the source table would not match the encryption attribute for the EMPNO column in the target table:

  • The EMP table is exported with the EMPNO column being encrypted, but prior to importing the table you remove the encryption attribute from the EMPNO column.
  • The EMP table is exported without the EMPNO column being encrypted, but prior to importing the table you enable encryption on the EMPNO column.
Example(示例)

In the following example, an encryption password, 123456, is assigned to the dump file, dpcd2be1.dmp.

expdp hr/hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir DUMPFILE=dpcd2be1.dmp ENCRYPTION_PASSWORD=123456
 

Encrypted columns in the employee_s_encrypt table will not be written as clear text in the dpcd2be1.dmp dump file. Note that in order to subsequently import the dpcd2be1.dmp file created by this example, you will need to supply the same encryption password. (See "ENCRYPTION_PASSWORD" for an example of an import operation using the ENCRYPTION_PASSWORD parameter.)

ESTIMATE

Default: BLOCKS

Purpose(目的)

Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.

 

Specifies the method used to estimate the disk space occupied by the exported tables. The default is BLOCKS.

With BLOCKS, Oracle estimates an object's size by multiplying the number of data blocks it occupies by the data block size; with STATISTICS, the estimate is based on the most recent statistics for each object.

Syntax and Description(语法和描述)
ESTIMATE={BLOCKS | STATISTICS}
 
  • BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the source objects, times the appropriate block sizes.
  • STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
Restrictions(限制)

If the Data Pump export job involves compressed tables, the default size estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS.

Example(示例)

The following example shows a use of the ESTIMATE parameter in which the estimate is calculated using statistics for the employees table:

> expdp hr/hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1
 DUMPFILE=estimate_stat.dmp

ESTIMATE_ONLY

Default: n

Purpose(目的)

Instructs Export to estimate the space that a job would consume, without actually performing the export operation.

Specifies whether to only estimate the disk space the export job would consume; the default is N.

When set to Y, the export job only estimates the disk space the objects occupy and does not perform the export; when set to N, the job estimates the disk space and also performs the export operation.

Syntax and Description(语法和描述)
ESTIMATE_ONLY={y | n}
 

If ESTIMATE_ONLY=y, then Export estimates the space that would be consumed, but quits without actually performing the export operation.

Example(示例)

The following shows an example of using the ESTIMATE_ONLY parameter to determine how much space an export of the HR schema will take.

> expdp hr/hr ESTIMATE_ONLY=y NOLOGFILE=y

EXCLUDE

Default: none

Purpose(目的)

Enables you to filter the metadata that is exported by specifying objects and object types that you want excluded from the export operation.

This option specifies object types or individual objects to exclude from the operation.

object_type specifies the object type to exclude, and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.

Syntax and Description(语法和描述)
EXCLUDE=object_type[:name_clause] [, ...]
 

All object types for the given mode of export will be included except those specified in an EXCLUDE statement. If an object is excluded, all of its dependent objects are also excluded. For example, excluding a table will also exclude all indexes and triggers on the table.

The name_clause is optional. It allows selection of specific objects within an object type. It is a SQL expression used as a filter on the type's object names. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). The name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single-quotation marks are required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'EMP%'" to exclude all indexes whose names start with emp.

If no name_clause is provided, all objects of the specified type are excluded.

More than one EXCLUDE statement can be specified.

Oracle recommends that you place EXCLUDE clauses in a parameter file to avoid having to use escape characters on the command line.
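
For instance, a hypothetical parameter file, hr_exclude.par, carrying the index filter from the previous paragraph might look like the following sketch, which avoids any command-line escaping:

DIRECTORY=dpump_dir1
DUMPFILE=hr_exclude.dmp
EXCLUDE=INDEX:"LIKE 'EMP%'"

> expdp hr/hr PARFILE=hr_exclude.par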

If the object_type you specify is CONSTRAINT, GRANT, or USER, you should be aware of the effects this will have, as described in the following paragraphs.

Excluding Constraints

The following constraints cannot be excluded:

  • NOT NULL constraints
  • Constraints needed for the table to be created and loaded successfully; for example, primary key constraints for index-organized tables, or REF SCOPE and WITH ROWID constraints for tables with REF columns

This means that the following EXCLUDE statements will be interpreted as follows:

  • EXCLUDE=CONSTRAINT will exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.
  • EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.
Excluding Grants and Users

Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.

Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.

To exclude a specific user and all objects of that user, specify a filter such as the following (where hr is the schema name of the user you want to exclude):

EXCLUDE=SCHEMA:"='HR'"
 

If you try to exclude a user by using a statement such as EXCLUDE=USER:"='HR'", then only the information used in CREATE USER hr DDL statements will be excluded, and you may not get the results you expect.

Restrictions(限制)

The EXCLUDE and INCLUDE parameters are mutually exclusive.

Example(示例)

The following is an example of using the EXCLUDE statement.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,
PACKAGE, FUNCTION
 

This will result in a schema-mode export in which all of the hr schema will be exported except its views, packages, and functions.

 

FILESIZE

Default: 0 (unlimited)

Purpose(目的)

Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable.

 

Specifies the maximum size of each dump file; the default is 0, meaning there is no limit on file size.

Syntax and Description(语法和描述)
FILESIZE=integer[B | K | M | G]
 

The integer can be followed by B, K, M, or G (indicating bytes, kilobytes, megabytes, and gigabytes respectively). Bytes is the default. The actual size of the resulting file may be rounded down slightly to match the size of the internal blocks used in dump files.

Restrictions(限制)

The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.

Example(示例)

The following shows an example in which the size of the dump file is set to 3 megabytes:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3M
 

If three megabytes had not been sufficient to hold all the exported data, then the following error would have been displayed and the job would have stopped:

ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
 

The actual number of bytes that could not be allocated may vary. Also, this number does not represent the amount of space needed to complete the entire export operation. It indicates only the size of the current object that was being exported when the job ran out of dump file space. This situation can be corrected by first attaching to the stopped job, adding one or more files using the ADD_FILE command, and then restarting the operation.
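
A sketch of that recovery sequence, assuming the stopped job is named hr.sys_export_schema_01 (a hypothetical name) and using the interactive-command mode ADD_FILE and START_JOB commands:

> expdp hr/hr ATTACH=hr.sys_export_schema_01
Export> ADD_FILE=hr_3m_extra.dmp
Export> START_JOB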

FLASHBACK_SCN

Default: none

Purpose(目的)

Specifies the system change number (SCN) that Export will use to enable the Flashback Query utility.

 

Specifies that table data as of a particular SCN be exported: FLASHBACK_SCN=scn_value.

scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.

Syntax and Description(语法和描述)
FLASHBACK_SCN=scn_value
 

The export operation is performed with data that is consistent as of the specified SCN. If the NETWORK_LINK parameter is specified, the SCN refers to the SCN of the source database.

Restrictions(限制)

FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

The FLASHBACK_SCN parameter pertains only to the flashback query capability of Oracle Database 10g release 1. It is not applicable to Flashback Database, Flashback Drop, or any other flashback capabilities new as of Oracle Database 10g release 1.

Example(示例)

The following example assumes that SCN 384632 exists. It exports the hr schema as of SCN 384632.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
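
One way to look up a usable SCN on the source database beforehand is a query such as the following minimal sketch (requires SELECT access to V$DATABASE):

SQL> SELECT current_scn FROM v$database;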

 

 

FLASHBACK_TIME

Default: none

Purpose(目的)

The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.

 

Specifies that table data as of a particular point in time be exported.

 

Syntax and Description(语法和描述)
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"
 

Because the TO_TIMESTAMP value is enclosed in quotation marks, it would be best to put this parameter in a parameter file. Otherwise, you might need to use escape characters on the command line in front of the quotation marks. See Use of Quotation Marks On the Data Pump Command Line.

Restrictions(限制)

FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.

The FLASHBACK_TIME parameter pertains only to the flashback query capability of Oracle Database 10g release 1. It is not applicable to Flashback Database, Flashback Drop, or any other flashback capabilities new as of Oracle Database 10g release 1.

Example(示例)

You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. For example, suppose you have a parameter file, flashback.par, with the following contents:

DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('25-08-2003 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
 

You could then issue the following command:

> expdp hr/hr PARFILE=flashback.par
 

The export operation will be performed with data that is consistent with the SCN that most closely matches the specified time.

 

 

 

 

FULL

Default: n

Purpose(目的)

Specifies that you want to perform a full database mode export.

 

Specifies a full database mode export; the default is N. FULL={Y | N}. When set to Y, a full database export is performed.

Syntax and Description(语法和描述)
FULL={y | n}
 

FULL=y indicates that all data and metadata are to be exported. Filtering can restrict what is exported using this export mode. See Filtering During Export Operations.

To perform a full export, you must have the EXP_FULL_DATABASE role.
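
As a sketch of that prerequisite (run by a suitably privileged user, since EXP_FULL_DATABASE is a role):

SQL> GRANT EXP_FULL_DATABASE TO hr;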

Restrictions(限制)
  • The following system schemas are not exported as part of a Full export because the metadata they contain is exported as part of other objects in the dump file set: SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP, and WMSYS.
  • Grants on objects owned by the SYS schema are never exported.
Example(示例)

The following is an example of using the FULL parameter. The dump file, expfull.dmp, is written to the dpump_dir2 directory.

> expdp hr/hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=y NOLOGFILE=y

HELP

Default: N

Purpose(目的)

Displays online help for the Export utility.

Syntax and Description(语法和描述)
HELP = {y | n}
 

If HELP=y is specified, Export displays a summary of all Export command-line parameters and interactive commands.

Example(示例)
> expdp HELP = y
 

This example will display a brief description of all Export parameters and commands.

INCLUDE

Default: none

Purpose(目的)

Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.

 

Specifies the object types and related objects to include in the export.

Syntax and Description(语法和描述)
INCLUDE = object_type[:name_clause] [, ...]
 

Only object types explicitly specified in INCLUDE statements, and their dependent objects, are exported. No other object types, including the schema definition information that is normally part of a schema-mode export when you have the EXP_FULL_DATABASE role, are exported.

To see a list of valid object type path names for use with the INCLUDE parameter, you can query the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.

The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). The optional name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single-quotation marks are required to delimit the name strings.

Oracle recommends that INCLUDE statements be placed in a parameter file; otherwise you might have to use operating system-specific escape characters on the command line before quotation marks. See Use of Quotation Marks On the Data Pump Command Line.

For example, suppose you have a parameter file named hr.par with the following content:

SCHEMAS=HR
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"
 

You could then use the hr.par file to start an export operation, without having to enter any other parameters on the command line:

> expdp hr/hr parfile=hr.par
Restrictions(限制)
  • The INCLUDE and EXCLUDE parameters are mutually exclusive.
  • Grants on objects owned by the SYS schema are never exported.
Example(示例)

The following example performs an export of all tables (and their dependent objects) in the hr schema:

> expdp hr/hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=y

JOB_NAME

Default: system-generated name of the form SYS_EXPORT_<mode>_NN

Purpose(目的)

Used to identify the export job in subsequent actions, such as when the ATTACH parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views. The job name becomes the name of the master table in the current user's schema. The master table is used to control the export job.

 

Specifies the name of the export job.

Syntax and Description(语法和描述)
JOB_NAME=jobname_string
 

The jobname_string specifies a name of up to 30 bytes for this export job. The bytes must represent printable characters and spaces. If spaces are included, the name must be enclosed in single quotation marks (for example, 'Thursday Export'). The job name is implicitly qualified by the schema of the user performing the export operation.

The default job name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'.

Example(示例)

The following example shows an export operation that is assigned a job name of exp_job:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job
NOLOGFILE=y

LOGFILE

Default: export.log

Purpose(目的)

Specifies the name, and optionally, a directory, for the log file of the export job.

 

Specifies the name of the export log file.

Syntax and Description(语法和描述)
LOGFILE=[directory_object:]file_name
 

You can specify a database directory_object previously established by the DBA, assuming that you have access to it. This overrides the directory object specified with the DIRECTORY parameter.

The file_name specifies a name for the log file. The default behavior is to create a file named export.log in the directory referenced by the directory object specified in the DIRECTORY parameter.

All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS command in interactive mode.)

A log file is always created for an export job unless the NOLOGFILE parameter is specified. As with the dump file set, the log file is relative to the server and not the client.

An existing file matching the filename will be overwritten.

Example(示例)

The following example shows how to specify a log file name if you do not want to use the default:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log

NETWORK_LINK

Default: none

Purpose(目的)

Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.

 

Specifies a database link name. This option must be set to export objects from a remote database into a dump file on the local instance.

Syntax and Description(语法和描述)
NETWORK_LINK=source_database_link
 

The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.

The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
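
A minimal sketch of creating such a link (the remote credentials and the connect string source_db are hypothetical):

SQL> CREATE DATABASE LINK source_database_link CONNECT TO hr IDENTIFIED BY hr USING 'source_db';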

If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.

Restrictions(限制)

When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be exported (not partitions of tables).

The only types of database links supported by Data Pump Export are: public, fixed-user, and connected-user. Current-user database links are not supported.

Example(示例)

The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.

> expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
  DUMPFILE=network_export.dmp LOGFILE=network_export.log

NOLOGFILE

Default: n

Purpose(目的)

Specifies whether to suppress creation of a log file.

This option specifies whether to suppress creation of the export log file; the default is N.

Syntax and Description(语法和描述)
NOLOGFILE={y | n}
 

Specify NOLOGFILE=y to suppress the default behavior of creating a log file. Progress and error information is still written to the standard output device of any attached clients, including the client that started the original export operation. If there are no clients attached to a running job and you specify NOLOGFILE=y, you run the risk of losing important progress and error information.

Example(示例)

The following is an example of using the NOLOGFILE parameter:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp NOLOGFILE=y
 

This command results in a schema-mode export in which no log file is written.

PARALLEL

Default: 1

Purpose(目的)

Specifies the maximum number of threads of active execution operating on behalf of the export job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process and worker processes acting as query coordinators in parallel query operations do not count toward this total.

This parameter enables you to make trade-offs between resource consumption and elapsed time.

Specifies the number of parallel processes used to perform the export; the default is 1.

 

Syntax and Description(语法和描述)
PARALLEL=integer
 

The value you specify for integer should be less than, or equal to, the number of files in the dump file set (or you should specify substitution variables in the dump file specifications). Because each active worker process or I/O server process writes exclusively to one file at a time, an insufficient number of files can have adverse effects. Some of the worker processes will be idle while waiting for files, thereby degrading the overall performance of the job. More importantly, if any member of a cooperating group of parallel I/O server processes cannot obtain a file for output, then the export operation will be stopped with an ORA-39095 error. Both situations can be corrected by attaching to the job using the Data Pump Export utility, adding more files using the ADD_FILE command while in interactive mode, and in the case of a stopped job, restarting the job.

To increase or decrease the value of PARALLEL during job execution, use interactive-command mode. Decreasing parallelism does not result in fewer worker processes associated with the job; it merely decreases the number of worker processes that will be executing at any given time. Also, any ongoing work must reach an orderly completion point before the decrease takes effect. Therefore, it may take a while to see any effect from decreasing the value. Idle workers are not deleted until the job exits.

Increasing the parallelism takes effect immediately if there is work that can be performed in parallel.
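
For instance, a sketch of raising the degree of a running job from a second session (the job name is hypothetical; PARALLEL here is the interactive-command mode command):

> expdp hr/hr ATTACH=hr.par4_job
Export> PARALLEL=8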

 

See Also:

Controlling Resource Consumption

 

Restrictions(限制)

This parameter is valid only in the Enterprise Edition of Oracle Database 10g.

Example(示例)

The following is an example of using the PARALLEL parameter:

> expdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log 
JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
 

This results in a schema-mode export of the hr schema in which up to four files could be created in the path pointed to by the directory object, dpump_dir1.

PARFILE

Default: none

Purpose(目的)

Specifies the name of an export parameter file.

 

Specifies the name of the export parameter file.

Syntax and Description(语法和描述)
PARFILE=[directory_path]file_name
 

Unlike dump and log files, which are created and written by the Oracle database, the parameter file is opened and read by the client running the expdp image. Therefore, a directory object name is neither required nor appropriate. The directory path is an operating system-specific directory specification. The default is the user's current directory.

The use of parameter files is highly recommended if you are using parameters whose values require the use of quotation marks. See Use of Quotation Marks On the Data Pump Command Line.

Restrictions(限制)

The PARFILE parameter cannot be specified within a parameter file.

Example(示例)

The content of an example parameter file, hr.par, might be as follows:

SCHEMAS=HR
DUMPFILE=exp.dmp
DIRECTORY=dpump_dir1
LOGFILE=exp.log
 

You could then issue the following Export command to specify the parameter file:

> expdp hr/hr parfile=hr.par

QUERY

Default: none

Purpose(目的)

Enables you to filter the data that is exported by specifying a clause for a SQL SELECT statement, which is applied to all tables in the export job or to a specific table.

Specifies a WHERE condition used to filter the exported data.

Syntax and Description(语法和描述)
QUERY = [schema.][table_name:] query_clause
 

The query_clause is typically a WHERE clause for fine-grained row selection, but could be any SQL clause. For example, an ORDER BY clause could be used to speed up a migration from a heap-organized table to an index-organized table. If a [schema.]table_name is not supplied, the query is applied to (and must be valid for) all tables in the export job. A table-specific query overrides a query applied to all tables.

When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one can be specified per table. Oracle highly recommends that you place QUERY specifications in a parameter file; otherwise, you might have to use operating system-specific escape characters on the command line before each quotation mark. See Use of Quotation Marks On the Data Pump Command Line.

The query must be enclosed in single or double quotation marks.

To specify a schema other than your own in a table-specific query, you need the EXP_FULL_DATABASE role.

Restrictions(限制)

The QUERY parameter cannot be used in conjunction with the following parameters:

  • CONTENT=METADATA_ONLY
  • ESTIMATE_ONLY
  • TRANSPORT_TABLESPACES
Example(示例)

The following is an example of using the QUERY parameter:

> expdp hr/hr parfile=emp_query.par
 

The contents of the emp_query.par file are as follows:

QUERY=employees:'"WHERE department_id > 10 AND salary > 10000"'
NOLOGFILE=y 
DIRECTORY=dpump_dir1 
DUMPFILE=exp1.dmp 
 

This example unloads all tables in the hr schema, but only the rows that fit the query expression. In this case, all rows in all tables (except employees) in the hr schema will be unloaded. For the employees table, only rows that meet the query criteria are unloaded.

SAMPLE

Default: None

Purpose(目的)

Allows you to specify a percentage of data to be sampled and unloaded from the source database.

Syntax and Description(语法和描述)

SAMPLE=[[schema_name.]table_name:]sample_percent
 

This parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample. It does not mean that the database will retrieve exactly that amount of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.

The sample_percent can be applied to specific tables. In the following example, 50% of the HR.EMPLOYEES table will be exported:

SAMPLE="HR"."EMPLOYEES":50
 

If you specify a schema, you must also specify a table. However, you can specify a table without specifying a schema; the current user will be assumed. If no table is specified, then the sample_percent value applies to the entire export job.

Note that you can use this parameter in conjunction with the Data Pump Import PCTSPACE transform, so that the size of storage allocations matches the sampled data subset. (See TRANSFORM.)

Restrictions(限制)

The SAMPLE parameter is not valid for network exports.

Example(示例)

In the following example, the value 70 for SAMPLE is applied to the entire export job because no table name is specified.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70

SCHEMAS

Default: current user's schema

Purpose(目的)

Specifies that you want to perform a schema-mode export. This is the default mode for Export.

This option specifies a schema-mode export; the default is the current user's schema.

Syntax and Description(语法和描述)
SCHEMAS=schema_name [, ...]
 

If you have the EXP_FULL_DATABASE role, then you can specify a single schema other than your own or a list of schema names. The EXP_FULL_DATABASE role also allows you to export additional nonschema object information for each specified schema so that the schemas can be re-created at import time. This additional information includes the user definitions themselves and all associated system and role grants, user password history, and so on. Filtering can further restrict what is exported using schema mode (see Filtering During Export Operations).

Restrictions(限制)

If you do not have the EXP_FULL_DATABASE role, then you can specify only your own schema.

Example(示例)

The following is an example of using the SCHEMAS parameter. Note that user hr is allowed to specify more than one schema because the EXP_FULL_DATABASE role was previously assigned to it for the purpose of these examples.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe 
 

This results in a schema-mode export in which the schemas, hr, sh, and oe will be written to the expdat.dmp dump file located in the dpump_dir1 directory.

STATUS

Default: 0

Purpose(目的)

Specifies the frequency at which the job status display is updated.

Specifies how often the detailed status of the export job is displayed; the default is 0.

Syntax and Description(语法和描述)
STATUS=[integer]
 

If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, no additional information is displayed beyond information about the completion of each object type, table, or partition.

This status information is written only to your standard output device, not to the log file (if one is in effect).

Example(示例)

The following is an example of using the STATUS parameter.

> expdp hr/hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300
 

This example will export the hr and sh schemas and display the status of the export every 5 minutes (60 seconds x 5 = 300 seconds).

TABLES

Default: none

Purpose(目的)

Specifies that you want to perform a table-mode export.

 

Specifies a table-mode export.

Syntax and Description(语法和描述)
TABLES=[schema_name.]table_name[:partition_name] [, ...]
 

Filtering can restrict what is exported using this mode (see Filtering During Export Operations). You can filter the data and metadata that is exported, by specifying a comma-delimited list of tables and partitions or subpartitions. If a partition name is specified, it must be the name of a partition or subpartition in the associated table. Only the specified set of tables, partitions, and their dependent objects are unloaded.

The table name that you specify can be preceded by a qualifying schema name. All table names specified must reside in the same schema. The schema defaults to that of the current user. To specify a schema other than your own, you must have the EXP_FULL_DATABASE role.

The use of wildcards is supported for one table name per export operation. For example, TABLES=emp% would export all tables having names that start with 'EMP'.

 

schema_name specifies the schema name, table_name specifies the table to export, and partition_name specifies the partition to export.

Restrictions(限制)

Cross-schema references are not exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported.

Types used by the table are not exported in table mode. This means that if you subsequently import the dump file and the TYPE does not already exist in the destination database, the table creation will fail.

The use of synonyms as values for the TABLES parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, it would not be valid to use TABLES=regn. An error would be returned.

The export of table partitions is not supported when the NETWORK_LINK parameter is used.

The export of tables that include wildcards in the table name is not supported if the table has partitions.

Examples(示例)

The following example shows a simple use of the TABLES parameter to export three tables found in the hr schema: employees, jobs, and departments. Because user hr is exporting tables found in the hr schema, the schema name is not needed before the table names.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
 

The following example shows the use of the TABLES parameter to export partitions:

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000
 

This example exports the partitions, sales_Q1_2000 and sales_Q2_2000, from the table sales in the schema sh.

TABLESPACES

Default: none

Purpose(目的)

Specifies a list of tablespace names to be exported in tablespace mode.

Syntax and Description(语法和描述)
TABLESPACES=tablespace_name [, ...]
 

In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. If any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Filtering can restrict what is exported using this mode (see Filtering During Export Operations).

Example(示例)

The following is an example of using the TABLESPACES parameter. The example assumes that tablespaces tbs_4, tbs_5, and tbs_6 already exist.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp 
TABLESPACES=tbs_4, tbs_5, tbs_6
 

This results in a tablespace export in which tables (and their dependent objects) from the specified tablespaces (tbs_4, tbs_5, and tbs_6) will be unloaded.

TRANSPORT_FULL_CHECK

Default: n

Purpose(目的)

Specifies whether or not to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.

 

This option specifies how dependencies between transported and non-transported tablespaces are checked; the default is N. When set to Y, the export job checks for complete two-way dependencies: if only one of the tablespaces holding a table or holding its indexes is being transported, an error is reported. When set to N, the job checks only one-way dependencies: if the tablespace holding an index is transported but the tablespace holding its table is not, an error is reported; if the tablespace holding the table is transported but the tablespace holding its indexes is not, no error is reported.

Syntax and Description(语法和描述)
TRANSPORT_FULL_CHECK={y | n}
 

If TRANSPORT_FULL_CHECK=y, then Export verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=n, then Export verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the export operation is terminated.

In addition to this check, Export always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES are actually contained within the tablespace set.

Example(示例)

The following is an example of using the TRANSPORT_FULL_CHECK parameter. It assumes that tablespace tbs_1 exists.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp 
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log 

TRANSPORT_TABLESPACES

Default: none

Purpose(目的)

Specifies that you want to perform a transportable-tablespace-mode export.

Specifies a transportable-tablespace-mode export.

Syntax and Description(语法和描述)
TRANSPORT_TABLESPACES=tablespace_name [, ...]
 

Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.

 

 

Restrictions(限制)

Transportable jobs are not restartable.

Transportable jobs are restricted to a degree of parallelism of 1.

Transportable tablespace mode requires that you have the EXP_FULL_DATABASE role.

Example(示例) 1

The following is an example of using the TRANSPORT_TABLESPACES parameter in a file-based job (rather than network-based). The tablespace tbs_1 is the tablespace being moved. This example assumes that tablespace tbs_1 exists and that it has been set to read-only.

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
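
A sketch of the preparation step that puts tbs_1 into the required read-only state before running the export above:

SQL> ALTER TABLESPACE tbs_1 READ ONLY;

After the export and the copy of the data files to the target system are complete, the tablespace can be returned to normal use with ALTER TABLESPACE tbs_1 READ WRITE.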

 

 

VERSION

Default: COMPATIBLE

Purpose(目的)

Specifies the version of database objects to be exported. This can be used to create a dump file set that is compatible with a previous release of Oracle Database. Note that this does not mean that Data Pump Export can be used with versions of Oracle Database prior to 10.1. Data Pump Export only works with Oracle Database 10g release 1 (10.1) or later. The VERSION parameter simply allows you to identify the version of the objects being exported.

Syntax and Description(语法和描述)
VERSION={COMPATIBLE | LATEST | version_string}
 

The legal values for the VERSION parameter are as follows:

  • COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher.
  • LATEST - The version of the metadata corresponds to the database version.
  • version_string - A specific database version (for example, 10.0.0). In Oracle Database 10g, this value cannot be lower than 9.2.

Database objects or attributes that are incompatible with the specified version will not be exported. For example, tables containing new datatypes that are not supported in the specified version will not be exported.
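
To see the compatibility level that the COMPATIBLE default will resolve to, you can check the initialization parameter of the same name; a minimal sketch in SQL*Plus:

SQL> SHOW PARAMETER compatible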

 

 

Example(示例)

The following example shows an export for which the version of the metadata will correspond to the database version:

> expdp hr/hr TABLES=hr.employees VERSION=LATEST DIRECTORY=dpump_dir1
DUMPFILE=emp.dmp NOLOGFILE=y


Data Pump Import

 

ATTACH

Default: current job in user's schema, if there is only one running job

Purpose(目的)

Attaches the client session to an existing import job and automatically places you in interactive-command mode.

Syntax and Description(语法和描述)
ATTACH [=[schema_name.]job_name]
 

Specify a schema_name if the schema to which you are attaching is not your own. You must have the IMP_FULL_DATABASE role to do this.

A job_name does not have to be specified if only one running job is associated with your schema and the job is active. If the job you are attaching to is stopped, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.

When you are attached to the job, Import displays a description of the job and then displays the Import prompt.

Restrictions(限制)

When you specify the ATTACH parameter, you cannot specify any other parameters except for the connection string (user/password).

You cannot attach to a job in another schema unless it is already running.

If the dump file set or master table for the job have been deleted, the attach operation will fail.

Altering the master table in any way will lead to unpredictable results.

Example(示例)

The following is an example of using the ATTACH parameter.

> impdp hr/hr ATTACH=import_job
 

This example assumes that a job named import_job exists in the hr schema.

 

 

CONTENT

Default: ALL

Purpose(目的)

Enables you to filter what is loaded during the import operation.

Specifies the content to import.

Syntax and Description(语法和描述)
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
 
  • ALL loads any data and metadata contained in the source. This is the default.
  • DATA_ONLY loads only table row data into existing tables; no database objects are created.
  • METADATA_ONLY loads only database object definitions; no table row data is loaded.
Example(示例)

The following is an example of using the CONTENT parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY
 

This command will execute a full import that will load only the metadata in the expfull.dmp dump file. It executes a full import because that is the default for file-based imports in which no import mode is specified.

DIRECTORY

Default: DATA_PUMP_DIR

Purpose(目的)

Specifies the default location in which the import job can find the dump file set and where it should create log and SQL files.

Syntax and Description(语法和描述)
DIRECTORY=directory_object
 

The directory_object is the name of a database directory object (not the name of an actual directory). Upon installation, privileged users have access to a default directory object named DATA_PUMP_DIR. Users with access to DATA_PUMP_DIR need not use the DIRECTORY parameter at all.

A directory object specified on the DUMPFILE, LOGFILE, or SQLFILE parameter overrides any directory object that you specify for the DIRECTORY parameter. You must have Read access to the directory used for the dump file set and Write access to the directory used to create the log and SQL files.

Example(示例)

The following is an example of using the DIRECTORY parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp 
LOGFILE=dpump_dir2:expfull.log
 

This command results in the import job looking for the expfull.dmp dump file in the directory pointed to by the dpump_dir1 directory object. The dpump_dir2 directory object specified on the LOGFILE parameter overrides the DIRECTORY parameter so that the log file is written to dpump_dir2.

 

 

DUMPFILE

Default: expdat.dmp

Purpose(目的)

Specifies the names and optionally, the directory objects of the dump file set that was created by Export.

Syntax and Description(语法和描述)
DUMPFILE=[directory_object:]file_name [, ...]
 

The directory_object is optional if one has already been established by the DIRECTORY parameter. If you do supply a value here, it must be a directory object that already exists and that you have access to. A database directory object that is specified as part of the DUMPFILE parameter overrides a value specified by the DIRECTORY parameter.

The file_name is the name of a file in the dump file set. The filenames can also be templates that contain the substitution variable, %U. If %U is used, Import examines each file that matches the template (until no match is found) in order to locate all files that are part of the dump file set. The %U expands to a 2-digit incrementing integer starting with 01.

Sufficient information is contained within the files for Import to locate the entire set, provided the file specifications in the DUMPFILE parameter encompass the entire set. The files are not required to have the same names, locations, or order that they had at export time.

Example(示例)

The following is an example of using the Import DUMPFILE parameter. You can create the dump files used in this example by running the example provided for the Export DUMPFILE parameter. See DUMPFILE.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp
 

Because a directory object (dpump_dir2) is specified for the exp1.dmp dump file, the import job will look there for the file. It will also look in dpump_dir1 for dump files of the form exp2<nn>.dmp. The log file will be written to dpump_dir1.

ENCRYPTION_PASSWORD

Default: none

Purpose(目的)

Specifies a key for accessing encrypted column data in the dump file set.

Syntax and Description(语法和描述)
ENCRYPTION_PASSWORD = password
 

This parameter is required on an import operation if an encryption password was specified on the export operation. The password that is specified must be the same one that was specified on the export operation.

This parameter is not used if an encryption password was not specified on the export operation.

To use the ENCRYPTION_PASSWORD parameter, you must have Transparent Data Encryption set up. See Oracle Database Advanced Security Administrator's Guide for more information about Transparent Data Encryption.

Restrictions(限制)

The ENCRYPTION_PASSWORD parameter applies only to columns that already have encrypted data. Data Pump neither provides nor supports encryption of entire dump files.

For network imports, the ENCRYPTION_PASSWORD parameter is not supported with user-defined external tables that have encrypted columns. The table will be skipped and an error message will be displayed, but the job will continue.

The ENCRYPTION_PASSWORD parameter is not valid for network import jobs.

Encryption attributes for all columns must match between the exported table definition and the target table. For example, suppose you have a table, EMP, and one of its columns is named EMPNO. Both of the following situations would result in an error because the encryption attribute for the EMPNO column in the source table would not match the encryption attribute for the EMPNO column in the target table:

  • The EMP table is exported with the EMPNO column being encrypted, but prior to importing the table you remove the encryption attribute from the EMPNO column.
  • The EMP table is exported without the EMPNO column being encrypted, but prior to importing the table you enable encryption on the EMPNO column.
Example(示例)

In the following example, the encryption password, 123456, must be specified because it was specified when the dpcd2be1.dmp dump file was created (see "ENCRYPTION_PASSWORD").

impdp hr/hr tables=employee_s_encrypt directory=dpump_dir dumpfile=dpcd2be1.dmp  ENCRYPTION_PASSWORD=123456
 

During the import operation, any columns in the employee_s_encrypt table that were encrypted during the export operation are written as clear text.

ESTIMATE

Default: BLOCKS

Purpose(目的)

Instructs the source system in a network import operation to estimate how much data will be generated.

Syntax and Description(语法和描述)
ESTIMATE={BLOCKS | STATISTICS}
 

The valid choices for the ESTIMATE parameter are as follows:

  • BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the source objects times the appropriate block sizes.
  • STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.

The estimate that is generated can be used to determine a percentage complete throughout the execution of the import job.

Restrictions(限制)

The Import ESTIMATE parameter is valid only if the NETWORK_LINK parameter is also specified.

When the import source is a dump file set, the amount of data to be loaded is already known, so the percentage complete is automatically calculated.

Example(示例)

In the following example, source_database_link would be replaced with the name of a valid link to the source database.

> impdp hr/hr TABLES=job_history NETWORK_LINK=source_database_link
  DIRECTORY=dpump_dir1 ESTIMATE=statistics 
 

The job_history table in the hr schema is imported from the source database. A log file is created by default and written to the directory pointed to by the dpump_dir1 directory object. When the job begins, an estimate for the job is calculated based on table statistics.

EXCLUDE

Default: none

Purpose(目的)

Enables you to filter the metadata that is imported by specifying objects and object types that you want to exclude from the import job.

Syntax and Description(语法和描述)
EXCLUDE=object_type[:name_clause] [, ...]
 

For the given mode of import, all object types contained within the source (and their dependents) are included, except those specified in an EXCLUDE statement. If an object is excluded, all of its dependent objects are also excluded. For example, excluding a table will also exclude all indexes and triggers on the table.

The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE and VIEW, but not to GRANT). The optional name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'DEPT%'" to exclude all indexes whose names start with DEPT.

More than one EXCLUDE statement can be specified. Oracle recommends that you place EXCLUDE statements in a parameter file to avoid having to use operating system-specific escape characters on the command line.

As explained in the following sections, you should be aware of the effects of specifying certain objects for exclusion, in particular, CONSTRAINT, GRANT, and USER.

Excluding Constraints

The following constraints cannot be excluded:

  • NOT NULL constraints.
  • Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).

This means that the following EXCLUDE statements will be interpreted as follows:

  • EXCLUDE=CONSTRAINT will exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.
  • EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.
Excluding Grants and Users

Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.

Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.

To exclude a specific user and all objects of that user, specify a filter such as the following (where hr is the schema name of the user you want to exclude):

EXCLUDE=SCHEMA:"= 'HR'"

If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'HR'", only CREATE USER hr DDL statements will be excluded, and you may not get the results you expect.

Restrictions(限制)

The EXCLUDE and INCLUDE parameters are mutually exclusive.

Example(示例)

Assume the following is in a parameter file, exclude.par, being used by a DBA or some other user with the IMP_FULL_DATABASE role. (If you want to try the example, you will need to create this file.)

EXCLUDE=FUNCTION
EXCLUDE=PROCEDURE
EXCLUDE=PACKAGE
EXCLUDE=INDEX:"LIKE 'EMP%' "
 

You could then issue the following command. You can create the expfull.dmp dump file used in this command by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=exclude.par
 

All data from the expfull.dmp dump file will be loaded except for functions, procedures, packages, and indexes whose names start with emp.

FLASHBACK_SCN

Default: none

Purpose(目的)

Specifies the system change number (SCN) that Import will use to enable the Flashback utility.

Syntax and Description(语法和描述)
FLASHBACK_SCN=scn_number
 

The import operation is performed with data that is consistent as of the specified scn_number.

 

 

Restrictions(限制)

The FLASHBACK_SCN parameter is valid only when the NETWORK_LINK parameter is also specified.

The FLASHBACK_SCN parameter pertains only to the flashback query capability of Oracle Database 10g release 1. It is not applicable to Flashback Database, Flashback Drop, or any other flashback capabilities new as of Oracle Database 10g release 1.

FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

Example(示例)

The following is an example of using the FLASHBACK_SCN parameter.

> impdp hr/hr DIRECTORY=dpump_dir1 FLASHBACK_SCN=123456 
NETWORK_LINK=source_database_link
 

The source_database_link in this example would be replaced with the name of a source database from which you were importing data.

FLASHBACK_TIME

Default: none

Purpose(目的)

Specifies the time of a particular SCN.

Syntax and Description(语法和描述)
FLASHBACK_TIME="TO_TIMESTAMP()"
 

The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The import operation is performed with data that is consistent as of this SCN. Because the TO_TIMESTAMP value is enclosed in quotation marks, it would be best to put this parameter in a parameter file. Otherwise, you might need to use escape characters on the command line in front of the quotation marks. See Use of Quotation Marks On the Data Pump Command Line.

 

 

Restrictions(限制)

This parameter is valid only when the NETWORK_LINK parameter is also specified.

The FLASHBACK_TIME parameter pertains only to the flashback query capability of Oracle Database 10g release 1. It is not applicable to Flashback Database, Flashback Drop, or any other flashback capabilities new as of Oracle Database 10g release 1.

FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.

Example(示例)

You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. For example, suppose you have a parameter file, flashback_imp.par, that contains the following:

FLASHBACK_TIME="TO_TIMESTAMP('25-08-2003 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
 

You could then issue the following command:

> impdp hr/hr DIRECTORY=dpump_dir1 PARFILE=flashback_imp.par NETWORK_LINK=source_database_link
 

The import operation will be performed with data that is consistent with the SCN that most closely matches the specified time.

 

 

FULL

Default: Y

Purpose(目的)

Specifies that you want to perform a full database import.

Syntax and Description(语法和描述)

FULL=y

 

If you specify FULL=y, then everything from the source (either a dump file set or another database) is imported.

The IMP_FULL_DATABASE role is required on the target database and the EXP_FULL_DATABASE role is required on the source database if the NETWORK_LINK parameter is used.

If you are an unprivileged user importing from a file, only schemas that map to your own schema are imported.

FULL is the default mode when you are performing a file-based import.

Example(示例)

The following is an example of using the FULL parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y 
LOGFILE=dpump_dir2:full_imp.log
 

This example imports everything from the expfull.dmp dump file. In this example, a DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both the DUMPFILE parameter and the LOGFILE parameter. The directory objects can be different, as shown in this example.

HELP

Default: n

Purpose(目的)

Displays online help for the Import utility.

Syntax and Description(语法和描述)
HELP=y
 

If HELP=y is specified, Import displays a summary of all Import command-line parameters and interactive commands.

Example(示例)
> impdp HELP = Y
 

This example will display a brief description of all Import parameters and commands.

INCLUDE

Default: none

Purpose(目的)

Enables you to filter the metadata that is imported by specifying objects and object types to be imported for the current import mode.

Syntax and Description(语法和描述)
INCLUDE = object_type[:name_clause] [, ...]
 

Only object types in the source (and their dependents) that are explicitly specified in the INCLUDE statement are imported.

The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). The optional name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.

More than one INCLUDE statement can be specified. Oracle recommends that you place INCLUDE statements in a parameter file to avoid having to use operating system-specific escape characters on the command line.

To see a list of valid paths for use with the INCLUDE parameter, you can query the following views: DATABASE_EXPORT_OBJECTS for Full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode.
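For instance, to browse the INDEX-related object paths that are valid in schema mode, you could query one of these views. A minimal sketch (the LIKE filter is only an illustration):

SQL> SELECT object_path, comments FROM schema_export_objects WHERE object_path LIKE '%INDEX%';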

Restrictions(限制)

The INCLUDE and EXCLUDE parameters are mutually exclusive.

Example(示例)

Assume the following is in a parameter file, imp_include.par, being used by a DBA or some other user with the IMP_FULL_DATABASE role:

INCLUDE=FUNCTION
INCLUDE=PROCEDURE
INCLUDE=PACKAGE
INCLUDE=INDEX:"LIKE 'EMP%' "
 

You can then issue the following command:

> impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp 
PARFILE=imp_include.par
 

You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

The Import operation will load only functions, procedures, and packages from the hr schema and indexes whose names start with EMP. Although this is a privileged-mode import (the user must have the IMP_FULL_DATABASE role), the schema definition is not imported, because the USER object type was not specified in an INCLUDE statement.

JOB_NAME

Default: system-generated name of the form SYS_<IMPORT or SQLFILE>_<mode>_NN

Purpose(目的)

Specifies a name for the import job. The job name is used to identify the import job in subsequent actions, such as ATTACH. The job name becomes the name of the master table in the current user's schema. The master table controls the import job.

Syntax and Description(语法和描述)
JOB_NAME=jobname_string
 

The jobname_string specifies a name of up to 30 bytes for this import job. The bytes must represent printable characters and spaces. If spaces are included, the name must be enclosed in single quotation marks (for example, 'Thursday Import'). The job name is implicitly qualified by the schema of the user performing the import operation.

The default job name is system-generated in the form SYS_<IMPORT or SQLFILE>_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_IMPORT_TABLESPACE_02'.
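To look up the name of an existing job (for use with ATTACH, for instance), you can query the job views mentioned earlier. A minimal sketch:

SQL> SELECT job_name, operation, state FROM user_datapump_jobs;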

Example(示例)

The following is an example of using the JOB_NAME parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp JOB_NAME=impjob01

LOGFILE

Default: import.log

Purpose(目的)

Specifies the name, and optionally, a directory object, for the log file of the import job.

Syntax and Description(语法和描述)
LOGFILE=[directory_object:]file_name
 

If you specify a directory_object, it must be one that was previously established by the DBA and that you have access to. This overrides the directory object specified with the DIRECTORY parameter. The default behavior is to create import.log in the directory referenced by the directory object specified in the DIRECTORY parameter.
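If no suitable directory object exists yet, the DBA can create one and grant access to it. A minimal sketch (the directory name, path, and grantee are only illustrations):

SQL> CREATE DIRECTORY dpump_dir2 AS '/u01/app/oracle/dpump2';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir2 TO hr;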

If the file_name you specify already exists, it will be overwritten.

All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS command in interactive mode.)

A log file is always created unless the NOLOGFILE parameter is specified. As with the dump file set, the log file is relative to the server and not the client.

Example(示例)

The following is an example of using the LOGFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr SCHEMAS=HR DIRECTORY=dpump_dir2 LOGFILE=imp.log
 DUMPFILE=dpump_dir1:expfull.dmp
 

Because no directory object is specified on the LOGFILE parameter, the log file is written to the directory object specified on the DIRECTORY parameter.

NETWORK_LINK

Default: none

Purpose(目的)

Enables a network import when you specify the name of a valid database link to a source system.

Syntax and Description(语法和描述)
NETWORK_LINK=source_database_link
 

The NETWORK_LINK parameter initiates a network import. This means that the database to which the impdp client is connected contacts the remote source database referenced by source_database_link, retrieves the data, and writes it directly to the target database. There are no dump files involved.

The source_database_link provided must be the name of a valid link to a source database. If the database does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
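A minimal sketch of creating such a link (the credentials and the 'source_db_tns' connect string are only illustrations):

SQL> CREATE DATABASE LINK source_database_link CONNECT TO hr IDENTIFIED BY hr USING 'source_db_tns';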

If the source database is read-only, then the user on the source database must have a locally-managed tablespace assigned as a default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.

This parameter is required when any of the following parameters are specified: FLASHBACK_SCN, FLASHBACK_TIME, ESTIMATE, or TRANSPORT_TABLESPACES.

Restrictions(限制)

Network imports do not support the use of evolved types.

When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be imported (not partitions of tables).

If the USERID that is executing the import job has the IMP_FULL_DATABASE role on the target database, then that user must also have the EXP_FULL_DATABASE role on the source database.

The only types of database links supported by Data Pump Import are: public, fixed-user, and connected-user. Current-user database links are not supported.

Example(示例)

In the following example, the source_database_link would be replaced with the name of a valid database link.

> impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT
 

This example results in an import of the employees table (excluding constraints) from the source database. The log file is written to dpump_dir1, specified on the DIRECTORY parameter.

NOLOGFILE

Default: n

Purpose(目的)

Specifies whether or not to suppress the default behavior of creating a log file.

Syntax and Description(语法和描述)
NOLOGFILE={y | n}
 

If you specify NOLOGFILE=Y to suppress creation of a log file, progress and error information is still written to the standard output device of any attached clients, including the client that started the original import operation. If there are no clients attached to a running job and you specify NOLOGFILE=Y, you run the risk of losing important progress and error information.

Example(示例)

The following is an example of using the NOLOGFILE parameter.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp NOLOGFILE=Y
 

This command results in a full mode import (the default for file-based imports) of the expfull.dmp dump file. No log file is written because NOLOGFILE is set to y.

PARALLEL

Default: 1

Purpose(目的)

Specifies the maximum number of threads of active execution operating on behalf of the import job.

Syntax and Description(语法和描述)
PARALLEL=integer
 

The value you specify for integer specifies the maximum number of threads of active execution operating on behalf of the import job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process, idle workers, and worker processes acting as parallel execution coordinators in parallel I/O operations do not count toward this total. This parameter enables you to make trade-offs between resource consumption and elapsed time.

If the source of the import is a dump file set consisting of files, multiple processes can read from the same file, but performance may be limited by I/O contention.

To increase or decrease the value of PARALLEL during job execution, use interactive-command mode.

Parallelism is used for loading user data and package bodies, and for building indexes.

Restrictions(限制)

This parameter is valid only in the Enterprise Edition of Oracle Database 10g.

Example(示例)

The following is an example of using the PARALLEL parameter.

> impdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_import.log 
JOB_NAME=imp_par3 DUMPFILE=par_exp%U.dmp PARALLEL=3
 

This command imports the dump file set that is created when you run the example for the Export PARALLEL parameter. (See PARALLEL.) The names of the dump files are par_exp01.dmp, par_exp02.dmp, and par_exp03.dmp.

PARFILE

Default: none

Purpose(目的)

Specifies the name of an import parameter file.

Syntax and Description(语法和描述)
PARFILE=[directory_path]file_name
 

Unlike dump files, log files, and SQL files which are created and written by the server, the parameter file is opened and read by the client running the impdp image. Therefore, a directory object name is neither required nor appropriate. The default is the user's current directory. The use of parameter files is highly recommended if you are using parameters whose values require the use of quotation marks. (See Use of Quotation Marks On the Data Pump Command Line.)

Restrictions(限制)

The PARFILE parameter cannot be specified within a parameter file.

Example(示例)

The content of an example parameter file, hr_imp.par, might be as follows:

TABLES= countries, locations, regions
DUMPFILE=dpump_dir2:exp1.dmp,exp2%U.dmp
DIRECTORY=dpump_dir1
PARALLEL=3 
 

You could then issue the following command to execute the parameter file:

> impdp hr/hr PARFILE=hr_imp.par
 

The tables named countries, locations, and regions will be imported from the dump file set that is created when you run the example for the Export DUMPFILE parameter. (See DUMPFILE.) The import job looks for the exp1.dmp file in the location pointed to by dpump_dir2. It looks for any dump files of the form exp2<nn>.dmp in the location pointed to by dpump_dir1. The log file for the job will also be written to dpump_dir1.

QUERY

Default: none

Purpose(目的)

Enables you to filter the data that is imported by specifying a clause for a SQL SELECT statement, which is applied to all tables in the import job or to a specific table.

Syntax and Description(语法和描述)
QUERY=[[schema_name.]table_name:]query_clause
 

The query_clause is typically a WHERE clause for fine-grained row selection, but could be any SQL clause.

If a schema and table are not supplied, the query is applied to (and must be valid for) all tables in the source dump file set or database.

When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one can be specified per table.
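For instance, a parameter file could carry one table-specific query per table. A minimal sketch (both WHERE clauses are only illustrations):

QUERY=employees:"WHERE department_id > 10"
QUERY=departments:"WHERE location_id = 1700"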

The query must be enclosed in single or double quotation marks. Double quotation marks are recommended, because strings within the clause must be enclosed in single quotation marks.

Oracle recommends that you place QUERY specifications in a parameter file to avoid having to use operating system-specific escape characters on the command line.

When the QUERY parameter is used, the external tables method (rather than the direct path method) is used for data access.

To specify a schema other than your own in a table-specific query, you need the IMP_FULL_DATABASE role.

Restrictions(限制)

The QUERY parameter cannot be used in conjunction with the following parameters:

  • CONTENT=METADATA_ONLY
  • SQLFILE
  • TRANSPORT_DATAFILES
Example(示例)

The following is an example of using the QUERY parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL. Because the QUERY value uses quotation marks, Oracle recommends that you use a parameter file to avoid having to use escape characters on the command line. (See Use of Quotation Marks On the Data Pump Command Line.)

Suppose you have a parameter file, query_imp.par, that contains the following:

QUERY=departments:"WHERE department_id < 120"
 

You can then enter the following command:

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=query_imp.par NOLOGFILE=Y
 

Only data in expfull.dmp that meets the criteria specified in the QUERY parameter is imported.

REMAP_DATAFILE

Default: none

Purpose(目的)

Changes the name of the source datafile to the target datafile name in all SQL statements where the source datafile is referenced: CREATE TABLESPACE, CREATE LIBRARY, and CREATE DIRECTORY.

 

This option converts the source datafile name to the target datafile name; it may be needed when moving tablespaces between platforms.

Syntax and Description(语法和描述)
REMAP_DATAFILE=source_datafile:target_datafile
 

Remapping datafiles is useful when you move databases between platforms that have different file naming conventions. The source_datafile and target_datafile names should be exactly as you want them to appear in the SQL statements where they are referenced. Oracle recommends that you enclose datafile names in quotation marks to eliminate ambiguity on platforms for which a colon is a valid file specification character.

You must have the IMP_FULL_DATABASE role to specify this parameter.

Example(示例)

Because the REMAP_DATAFILE value uses quotation marks, Oracle recommends that you specify the parameter within a parameter file to avoid having to use escape characters on the command line. (See Use of Quotation Marks On the Data Pump Command Line.) For example, suppose you had a parameter file, payroll.par, with the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE='DB1$:[HRDATA.PAYROLL]tbs6.f':'/db1/hrdata/payroll/tbs6.f'
 

You can then issue the following command:

> impdp hr/hr PARFILE=payroll.par
 

This example remaps a VMS file specification (DB1$:[HRDATA.PAYROLL]tbs6.f) to a UNIX file specification (/db1/hrdata/payroll/tbs6.f) for all SQL DDL statements during the import. The dump file, db_full.dmp, is located by the directory object, dpump_dir1.

REMAP_SCHEMA

Default: none

Purpose(目的)

Loads all objects from the source schema into a target schema.

 

This option loads all objects from the source schema into the target schema.

Syntax and Description(语法和描述)
REMAP_SCHEMA=source_schema:target_schema
 

Multiple REMAP_SCHEMA lines can be specified, but the source schema must be different for each one. However, different source schemas can map to the same target schema. The mapping may not be 100 percent complete, because there are certain schema references that Import is not capable of finding. For example, Import will not find schema references embedded within the body of definitions of types, views, procedures, and packages.

If the schema you are remapping to does not already exist, the import operation creates it, provided the dump file set contains the necessary CREATE USER metadata for the source schema and you are importing with enough privileges. For example, the following Export commands would create the dump file sets with the necessary metadata to create a schema, because the user SYSTEM has the necessary privileges:

> expdp SYSTEM/password SCHEMAS=hr
> expdp SYSTEM/password FULL=y
 

If your dump file set does not contain the metadata necessary to create a schema, or if you do not have privileges, then the target schema must be created before the import operation is performed. This is because the unprivileged dump files do not contain the necessary information for the import to create the schema automatically.

If the import operation does create the schema, then after the import is complete, you must assign it a valid password in order to connect to it. The SQL statement to do this, which requires privileges, is:

SQL> ALTER USER [schema_name] IDENTIFIED BY [new_pswd] 
Restrictions(限制)

Unprivileged users can perform schema remaps only if their schema is the target schema of the remap. (Privileged users can perform unrestricted schema remaps.)

For example, SCOTT can remap BLAKE's objects to SCOTT, but SCOTT cannot remap SCOTT's objects to BLAKE.

Example(示例)

Suppose that you execute the following Export and Import commands to remap the hr schema into the scott schema:

> expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
 
> impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp 
REMAP_SCHEMA=hr:scott
 

In this example, if user scott already exists before the import, then the Import REMAP_SCHEMA command will add objects from the hr schema into the existing scott schema. You can connect to the scott schema after the import by using the existing password (without resetting it).

If user scott does not exist before you execute the import operation, Import automatically creates it with an unusable password. This is possible because the dump file, hr.dmp, was created by SYSTEM, which has the privileges necessary to create a dump file that contains the metadata needed to create a schema. However, you cannot connect to scott on completion of the import, unless you reset the password for scott on the target database after the import completes.

REMAP_TABLESPACE

Default: none

Purpose(目的)

Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.

 

This option imports all objects from the source tablespace into the target tablespace.

Syntax and Description(语法和描述)
REMAP_TABLESPACE=source_tablespace:target_tablespace
 

Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.
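If the quota is missing, the DBA can grant one before the import. A minimal sketch (the user and tablespace names are only illustrations):

SQL> ALTER USER scott QUOTA UNLIMITED ON tbs_6;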

Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. This is a simpler and cleaner method than the one provided in the original Import utility. That method was subject to many restrictions (including the number of tablespace subclauses) which sometimes resulted in the failure of some DDL commands.

By contrast, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.

Restrictions(限制)

Data Pump Import can only remap tablespaces for transportable imports in databases where the compatibility level is 10.1 or later.

Example(示例)

The following is an example of using the REMAP_TABLESPACE parameter.

> impdp hr/hr REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp 

REUSE_DATAFILES

Default: n

Purpose(目的)

Specifies whether or not the import job should reuse existing datafiles for tablespace creation.

 

This option specifies whether to overwrite datafiles that already exist when creating tablespaces. The default is n.

Syntax and Description(语法和描述)
REUSE_DATAFILES={y | n}
 

If the default (n) is used and the datafiles specified in CREATE TABLESPACE statements already exist, an error message from the failing CREATE TABLESPACE statement is issued, but the import job continues.

If this parameter is specified as y, the existing datafiles are reinitialized. Be aware that specifying Y can result in a loss of data.

Example(示例)

The following is an example of using the REUSE_DATAFILES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=reuse.log
REUSE_DATAFILES=Y
 

This example reinitializes datafiles referenced by CREATE TABLESPACE statements in the expfull.dmp file.

SCHEMAS

Default: none

Purpose(目的)

Specifies that a schema-mode import is to be performed.

Syntax and Description(语法和描述)
SCHEMAS=schema_name [,...]
 

If you have the IMP_FULL_DATABASE role, you can use this parameter to perform a schema-mode import by specifying a list of schemas to import. First, the schemas themselves are created (if they do not already exist), including system and role grants, password history, and so on. Then all objects contained within the schemas are imported. Nonprivileged users can specify only their own schemas or schemas remapped to their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it.

Schema-mode is the default mode when you are performing a network-based import.

Example(示例)

The following is an example of using the SCHEMAS parameter. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter. See SCHEMAS.

> impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp
 

The hr schema is imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1.

SKIP_UNUSABLE_INDEXES

Default: the value of the Oracle Database configuration parameter, SKIP_UNUSABLE_INDEXES.

Purpose(目的)

Specifies whether or not Import skips loading tables that have indexes that were set to the Index Unusable state (by either the system or the user).

 

Specifies whether the import skips indexes that were set to the Index Unusable state; the default comes from the database's SKIP_UNUSABLE_INDEXES initialization parameter.

Syntax and Description(语法和描述)
SKIP_UNUSABLE_INDEXES={y | n}
 

If SKIP_UNUSABLE_INDEXES is set to y, and a table or partition with an index in the Unusable state is encountered, the load of that table or partition proceeds anyway, as if the unusable index did not exist.

If SKIP_UNUSABLE_INDEXES is set to n, and a table or partition with an index in the Unusable state is encountered, that table or partition is not loaded. Other tables, with indexes not previously set Unusable, continue to be updated as rows are inserted.

If the SKIP_UNUSABLE_INDEXES parameter is not specified, then the setting of the Oracle Database configuration parameter, SKIP_UNUSABLE_INDEXES (whose default value is Y), will be used to determine how to handle unusable indexes.

If indexes used to enforce constraints are marked unusable, then the data is not imported into that table.
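To check the database-level setting that supplies the default, a minimal sketch:

SQL> SHOW PARAMETER skip_unusable_indexes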

Example(示例)

The following is an example of using the SKIP_UNUSABLE_INDEXES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log
SKIP_UNUSABLE_INDEXES=y

SQLFILE

Default: none

Purpose(目的)

Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.

Syntax and Description(语法和描述)
SQLFILE=[directory_object:]file_name
 

The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.

Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be replaced by a comment with only the schema name shown. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.

-- CONNECT hr
 

Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:

CONNECT hr/hr
 

For Streams and other Oracle database options, anonymous PL/SQL blocks may appear within the SQLFILE output. They should not be executed directly.

Example(示例)

The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql
 

A SQL file named expfull.sql is written to dpump_dir2.

STATUS

Default: 0

Purpose(目的)

Displays detailed status of the job, along with a description of the current operation. An estimated completion percentage for the job is also returned.

Syntax and Description(语法和描述)
STATUS[=integer]
 

If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, no additional information is displayed beyond information about the completion of each object type, table, or partition.

This status information is written only to your standard output device, not to the log file (if one is in effect).

Example(示例)

The following is an example of using the STATUS parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr NOLOGFILE=y STATUS=120 DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
 

In this example, the status is shown every two minutes (120 seconds).

STREAMS_CONFIGURATION

Default: y

Purpose(目的)

Specifies whether or not to import any general Streams metadata that may be present in the export dump file.

 

Specifies whether to import Streams metadata; the default is y.

Syntax and Description(语法和描述)
STREAMS_CONFIGURATION={y | n}
Example(示例)

The following is an example of using the STREAMS_CONFIGURATION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp STREAMS_CONFIGURATION=n

TABLE_EXISTS_ACTION

Default: SKIP (Note that if CONTENT=DATA_ONLY is specified, the default is APPEND, not SKIP.)

Purpose(目的)

Tells Import what to do if the table it is trying to create already exists.

 

This option specifies the action the import job takes when the table it is trying to create already exists; the default is SKIP.

With SKIP, the import job skips the existing table and moves on to the next object.

With APPEND, new rows are appended to the existing data.

With TRUNCATE, the import job truncates the table and then appends the new data.

With REPLACE, the import job drops the existing table, re-creates it, and appends the data.

Note that TRUNCATE cannot be used with clustered tables or with the NETWORK_LINK option.

Syntax and Description(语法和描述)
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
 

The possible values have the following effects:

  • SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
  • APPEND loads rows from the source and leaves existing rows unchanged.
  • TRUNCATE deletes existing rows and then loads rows from the source.
  • REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

The following considerations apply when you are using these options:

  • When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
  • When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.
  • When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.

If the existing table has active constraints and triggers, it is loaded using the external tables access method. However, be aware that if any row violates an active constraint, the load fails and no data is loaded.

If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints (see the sketch after this list).

  • When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may wish to compress your data after the load.
  • TRUNCATE cannot be used on clustered tables or over network links.
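A minimal sketch of the disable-load-reenable approach (the table, the EMP_EMAIL_UK constraint name, and the duplicate-removal rule are only illustrations):

SQL> ALTER TABLE hr.employees DISABLE CONSTRAINT emp_email_uk;
-- run the import with TABLE_EXISTS_ACTION=APPEND here
SQL> DELETE FROM hr.employees e
     WHERE e.rowid > (SELECT MIN(e2.rowid) FROM hr.employees e2 WHERE e2.email = e.email);
SQL> ALTER TABLE hr.employees ENABLE CONSTRAINT emp_email_uk;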
Example(示例)

The following is an example of using the TABLE_EXISTS_ACTION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
TABLE_EXISTS_ACTION=REPLACE

TABLES

Default: none

Purpose(目的)

Specifies that you want to perform a table-mode import.

Syntax and Description(语法和描述)
TABLES=[schema_name.]table_name[:partition_name]
 

In a table-mode import, you can filter the data that is imported from the source by specifying a comma-delimited list of tables and partitions or subpartitions.

If you do not supply a schema_name, it defaults to that of the current user. To specify a schema other than your own, you must have the IMP_FULL_DATABASE role.

If a partition_name is specified, it must be the name of a partition or subpartition in the associated table.

The use of wildcards with table names is also supported. For example, TABLES=emp% would import all tables having names that start with 'EMP'.

Restrictions(限制)

The use of synonyms as values for the TABLES parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, it would not be valid to use TABLES=regn. An error would be returned.

If you specify more than one table_name, they must all reside in the same schema.

The import of table partitions is not supported when the NETWORK_LINK parameter is used.

Example(示例)

The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
 

The following example shows the use of the TABLES parameter to import partitions:

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp 
TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000
 

This example imports the partitions sales_Q1_2000 and sales_Q2_2000 for the table sales in the schema sh.

TABLESPACES

Default: none

Purpose(目的)

Specifies that you want to perform a tablespace-mode import.

Syntax and Description(语法和描述)
TABLESPACES=tablespace_name [, ...]
 

Use TABLESPACES to specify a list of tablespace names whose tables and dependent objects are to be imported from the source (full, schema, tablespace, or table-mode export dump file set or another database).

Example(示例)

The following is an example of using the TABLESPACES parameter. It assumes that the tablespaces already exist. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLESPACES=tbs_1,tbs_2,
tbs_3,tbs_4
 

This example imports all tables that have data in tablespaces tbs_1, tbs_2, tbs_3, and tbs_4.

TRANSFORM

Default: none

Purpose(目的)

Enables you to alter object creation DDL for specific objects, as well as for all applicable objects being loaded.

This option specifies whether to modify the DDL statements that create objects.

transform_name specifies the transform to apply: SEGMENT_ATTRIBUTES covers segment attributes (physical attributes, storage attributes, tablespace, and logging), while STORAGE covers only segment storage attributes. value specifies whether those attributes are included, and object_type specifies the object type to which the transform applies.

Syntax and Description(语法和描述)
TRANSFORM = transform_name:value[:object_type]
 

The transform_name specifies the name of the transform. The possible options are as follows:

  • SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y.
  • STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
  • OID - If the value is specified as n, the assignment of the exported OID during the creation of object tables and types is inhibited. Instead, a new OID is assigned. This can be useful for cloning schemas, but does not affect referenced objects. The default value is y.
  • PCTSPACE - The value supplied for this transform must be a number greater than zero. It represents the percentage multiplier used to alter extent allocations and the size of data files.

Note that you can use the PCTSPACE transform in conjunction with the Data Pump Export SAMPLE parameter so that the size of storage allocations matches the sampled data subset. (See SAMPLE.)
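For instance, to create all storage allocations at half their exported size, a minimal sketch (the dump file name is only an illustration):

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_sample.dmp TRANSFORM=PCTSPACE:50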

The type of value specified depends on the transform used. Boolean values (y/n) are required for the SEGMENT_ATTRIBUTES, STORAGE, and OID transforms. Integer values are required for the PCTSPACE transform.

The object_type is optional. If supplied, it designates the object type to which the transform will be applied. If no object type is specified then the transform applies to all valid object types. The valid object types for each transform are shown in Table 3-1.

Table 3-1 Valid Object Types For the Data Pump Import TRANSFORM Parameter

Object Type       SEGMENT_ATTRIBUTES  STORAGE  OID  PCTSPACE
----------------  ------------------  -------  ---  --------
CLUSTER           X                   X             X
CONSTRAINT        X                   X             X
INC_TYPE                                       X
INDEX             X                   X             X
ROLLBACK_SEGMENT  X                   X             X
TABLE             X                   X        X    X
TABLESPACE        X                                 X
TYPE                                           X


Example(示例)

For the following example, assume that you have exported the employees table in the hr schema. The SQL CREATE TABLE statement that results when you then import the table is similar to the following:

CREATE TABLE "HR"."EMPLOYEES" 
   ( "EMPLOYEE_ID" NUMBER(6,0), 
     "FIRST_NAME" VARCHAR2(20), 
     "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE, 
     "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE, 
     "PHONE_NUMBER" VARCHAR2(20), 
     "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE, 
     "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE, 
     "SALARY" NUMBER(8,2), 
     "COMMISSION_PCT" NUMBER(2,2), 
     "MANAGER_ID" NUMBER(6,0), 
     "DEPARTMENT_ID" NUMBER(4,0)
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 10240 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 121
  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "SYSTEM" ;
 

If you do not want to retain the STORAGE clause or TABLESPACE clause, you can remove them from the CREATE TABLE statement by using the Import TRANSFORM parameter. Specify the value of SEGMENT_ATTRIBUTES as n. This results in the exclusion of segment attributes (both storage and tablespace) from the table.

> impdp hr/hr TABLES=hr.employees \
  DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
  TRANSFORM=SEGMENT_ATTRIBUTES:n:table
 

The resulting CREATE TABLE statement for the employees table would then look similar to the following. It does not contain a STORAGE or TABLESPACE clause; the attributes for the default tablespace for the HR schema will be used instead.

CREATE TABLE "HR"."EMPLOYEES" 
   ( "EMPLOYEE_ID" NUMBER(6,0), 
     "FIRST_NAME" VARCHAR2(20), 
     "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE, 
     "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE, 
     "PHONE_NUMBER" VARCHAR2(20), 
     "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE, 
     "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE, 
     "SALARY" NUMBER(8,2), 
     "COMMISSION_PCT" NUMBER(2,2), 
     "MANAGER_ID" NUMBER(6,0), 
     "DEPARTMENT_ID" NUMBER(4,0)
   );
 

As shown in the previous example, the SEGMENT_ATTRIBUTES transform applies to both storage and tablespace attributes. To omit only the STORAGE clause and retain the TABLESPACE clause, you can use the STORAGE transform, as follows:

> impdp hr/hr TABLES=hr.employees \
  DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
  TRANSFORM=STORAGE:n:table
 

The SEGMENT_ATTRIBUTES and STORAGE transforms can be applied to all applicable table and index objects by not specifying the object type on the TRANSFORM parameter, as shown in the following command:

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp \
 SCHEMAS=hr TRANSFORM=SEGMENT_ATTRIBUTES:n

TRANSPORT_DATAFILES

Default: none

Purpose(目的)

Specifies a list of datafiles to be imported into the target database by a transportable-mode import. The files must already have been copied from the source database system.

 

This option specifies the datafiles that are to be imported into the target database when moving tablespaces.

Syntax and Description(语法和描述)
TRANSPORT_DATAFILES=datafile_name
 

The datafile_name must include an absolute directory path specification (not a directory object name) that is valid on the system where the target database resides.

Example(示例)

The following is an example of using the TRANSPORT_DATAFILES parameter. Because the TRANSPORT_DATAFILES value is enclosed in quotation marks, Oracle recommends that you use a parameter file to avoid having to use escape characters on the command line. (See Use of Quotation Marks On the Data Pump Command Line.) Assume you have a parameter file, trans_datafiles.par, with the following content:

DIRECTORY=dpump_dir1
DUMPFILE=tts.dmp
TRANSPORT_DATAFILES='/user01/data/tbs1.f'
 

You can then issue the following command:

> impdp hr/hr PARFILE=trans_datafiles.par

TRANSPORT_FULL_CHECK

Default: n

Purpose(目的)

Specifies whether or not to verify that the specified transportable tablespace set has no dependencies.

Syntax and Description(语法和描述)
TRANSPORT_FULL_CHECK={y | n}
 

If TRANSPORT_FULL_CHECK=y, then Import verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the import operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=n, then Import verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the import operation is terminated.

In addition to this check, Import always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES are actually contained within the tablespace set.

Restrictions(限制)

This parameter is valid for transportable mode only when the NETWORK_LINK parameter is specified.

Example(示例)

In the following example, source_database_link would be replaced with the name of a valid database link. The example also assumes that a datafile named tbs6.f already exists.

Because the TRANSPORT_DATAFILES value is enclosed in quotation marks, Oracle recommends that you use a parameter file to avoid having to use escape characters on the command line. (See Use of Quotation Marks On the Data Pump Command Line.) For example, assume you have a parameter file, full_check.par, with the following content:

DIRECTORY=dpump_dir1
TRANSPORT_TABLESPACES=tbs_6
NETWORK_LINK=source_database_link
TRANSPORT_FULL_CHECK=y
TRANSPORT_DATAFILES='/wkdir/data/tbs6.f'
 

You can then issue the following command:

> impdp hr/hr PARFILE=full_check.par

TRANSPORT_TABLESPACES

Default: none

Purpose(目的)

Specifies that you want to perform a transportable-tablespace-mode import.

Syntax and Description(语法和描述)
TRANSPORT_TABLESPACES=tablespace_name [, ...]
 

Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be imported from the source database into the target database.

Restrictions(限制)

You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database into which you are importing must be at the same or higher release level as the source database.

The TRANSPORT_TABLESPACES parameter is valid only when the NETWORK_LINK parameter is also specified.

Example(示例)

In the following example, the source_database_link would be replaced with the name of a valid database link. The example also assumes that a datafile named tbs6.f has already been copied from the source database to the local system. Because the TRANSPORT_DATAFILES value is enclosed in quotation marks, Oracle recommends that you use a parameter file to avoid having to use escape characters on the command line. (See Use of Quotation Marks On the Data Pump Command Line.) Suppose you have a parameter file, tablespaces.par, with the following content:

DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link
TRANSPORT_TABLESPACES=tbs_6
TRANSPORT_FULL_CHECK=n
TRANSPORT_DATAFILES='/user01/data/tbs6.f'
 

You can then issue the following command:

> impdp hr/hr PARFILE=tablespaces.par

VERSION

Default: COMPATIBLE

Purpose(目的)

Specifies the version of database objects to be imported. Note that this does not mean that Data Pump Import can be used with versions of Oracle Database prior to 10.1. Data Pump Import only works with Oracle Database 10g release 1 (10.1) or later. The VERSION parameter simply allows you to identify the version of the objects being imported.

Syntax and Description(语法和描述)
VERSION={COMPATIBLE | LATEST | version_string}
 

This parameter can be used to load a target system whose Oracle database is at an earlier compatibility version than that of the source system. Database objects or attributes on the source system that are incompatible with the specified version will not be moved to the target. For example, tables containing new datatypes that are not supported in the specified version will not be imported. Legal values for this parameter are as follows:

  • COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2.0 or higher.
  • LATEST - The version of the metadata corresponds to the database version.
  • version_string - A specific database version (for example, 10.0.0). In Oracle Database 10g, this value must be 9.2.0 or higher.

 

See Also:

Moving Data Between Different Database Versions

 

Example(示例)

The following is an example of using the VERSION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees
VERSION=LATEST

CONTINUE_CLIENT

Purpose(目的)

Changes the mode from interactive-command mode to logging mode.

Syntax and Description(语法和描述)
CONTINUE_CLIENT
 

In logging mode, the job status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.

Example(示例)
Import> CONTINUE_CLIENT

EXIT_CLIENT

Purpose(目的)

Stops the import client session, exits Import, and discontinues logging to the terminal, but leaves the current job running.

Syntax and Description(语法和描述)
EXIT_CLIENT
 

Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time if it is still executing or in a stopped state. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.
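A minimal sketch of checking progress after exiting the client (the IMPJOB01 job name is only an illustration):

SQL> SELECT opname, sofar, totalwork FROM v$session_longops WHERE opname = 'IMPJOB01' AND sofar <> totalwork;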

Example(示例)
Import> EXIT_CLIENT

HELP

Purpose(目的)

Provides information about Data Pump Import commands available in interactive-command mode.

Syntax and Description(语法和描述)
HELP
 

Displays information about the commands available in interactive-command mode.

Example(示例)
Import> HELP

KILL_JOB

Purpose(目的)

Detaches all currently attached client sessions and then kills the current job. It exits Import and returns to the terminal prompt.

Syntax and Description(语法和描述)
KILL_JOB
 

A job that is killed using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being killed by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.

Example(示例)
Import> KILL_JOB

PARALLEL

Purpose(目的)

Enables you to increase or decrease the number of active worker processes for the current job.

Syntax and Description(语法和描述)
PARALLEL=integer
 

PARALLEL is available as both a command-line parameter and an interactive-mode parameter. You set it to the desired number of parallel processes. An increase takes effect immediately if there are enough resources and if there is a sufficient amount of work requiring parallelization. A decrease does not take effect until an existing process finishes its current task. If the integer value is decreased, workers are idled but not deleted until the job exits.

 

See Also:

PARALLEL for more information about parallelism

 

Restrictions(限制)

PARALLEL is available only in Enterprise Edition.

Example(示例)
PARALLEL=10

START_JOB

Purpose(目的)

Starts the current job to which you are attached.

Syntax and Description(语法和描述)
START_JOB[=SKIP_CURRENT]
 

The START_JOB command restarts the job to which you are currently attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issue a STOP_JOB command, provided the dump file set and master table remain undisturbed.

The SKIP_CURRENT option allows you to restart a job that previously failed to restart because execution of some DDL statement failed. The failing statement is skipped and the job is restarted from the next work item.

Neither SQLFILE jobs nor transportable-tablespace-mode imports are restartable.

Example(示例)
Import> START_JOB

STATUS

Purpose(目的)

Displays the cumulative status of the job, along with a description of the current operation. A completion percentage for the job is also returned.

Syntax and Description(语法和描述)
STATUS[=integer]
 

You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, the periodic status display is turned off and status is displayed only once.

This status information is written only to your standard output device, not to the log file (even if one is in effect).

Example(示例)

The following example will display the status every two minutes (120 seconds).

STATUS=120

STOP_JOB

Purpose(目的)

Stops the current job either immediately or after an orderly shutdown, and exits Import.

Syntax and Description(语法和描述)
STOP_JOB[=IMMEDIATE]
 

If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, the job can be attached to and restarted at a later time with the START_JOB command.

To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.

To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user and they will be detached. After all clients are detached, the process structure of the job is immediately run down. That is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.

Example(示例)
Import> STOP_JOB=IMMEDIATE

Hands-On Lab

[oracle@cuug ~]$ mkdir /u01/expdp_dir

SYS @ prod >create directory expdp_dir as '/u01/expdp_dir';

remap_schema

 

SYS @ prod >create tablespace test011 datafile '/u01/app/oracle/oradata/prod/test011.dbf' size 100m;

 

Tablespace created.

 

SYS @ prod >create user test011 identified by test011 default tablespace test011;

 

User created.

 

SYS @ prod >grant dba to test011;

 

Grant succeeded.

 

TEST011 @ prod >select tname from tab;

 

no rows selected

 

TEST011 @ prod >select owner,segment_name from dba_segments where owner='TEST011';

 

no rows selected

 

[oracle@cuug ~]$  impdp test011/test011@prod  directory=expdp_dir dumpfile=expdp_prod_0709.dmp logfile=expdp_prod_0709.log remap_schema=test:test011

 

remap_schema=userA:userB remaps all of userA's objects into userB's schema.
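
The dump file expdp_prod_0709.dmp is never created in this post; judging from the DATABASE_EXPORT object paths in the import log further down, it came from a full export along these lines (an assumed sketch, not the original command, and the system password is a placeholder):

[oracle@cuug ~]$ expdp system/oracle@prod full=y directory=expdp_dir dumpfile=expdp_prod_0709.dmp logfile=expdp_prod_0709_exp.log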

 

TEST011 @ prod >select tname from tab;

 

TNAME

------------------------------

T1

T2

TEST

INDEX_TEST

FATHER

CHILDEN

P

T

remap_tablespace

 

SYS @ prod >create tablespace test022 datafile '/u01/app/oracle/oradata/prod/test022.dbf' size 100m;

 

Tablespace created.

 

SYS @ prod >create user test022 identified by test022 default tablespace test022;

 

User created.

 

SYS @ prod >grant dba to test022;

 

Grant succeeded.

 

[oracle@cuug scripts]$ impdp test022/test022@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp  tables=test  table_exists_action=truncate logfile=expdp_prod_0709.log remap_tablespace=users:test022

 

With table_exists_action=truncate, the existing object must be dropped first for the remap to take effect (a truncated table keeps its segment in its current tablespace).

This moves the table test from the users tablespace to the test022 tablespace.

 

[oracle@cuug scripts]$ impdp test022/test022@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp  tables=test  table_exists_action=replace logfile=expdp_prod_0709.log remap_tablespace=users:test022

 

With table_exists_action=replace there is no need to drop the object first; the table test is recreated, moving it from the users tablespace to the test022 tablespace.
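
To confirm where the table landed after either run, the dictionary query pattern used throughout this lab applies (a sketch):

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where segment_name='TEST' and segment_type='TABLE';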

 

 

 

 

2.0 Check which tablespace user test's TEST table currently resides in

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where owner='TEST';

 

 

OWNER    SEGMENT_NAME    TABLESPACE_NAME

-------- --------------- ---------------

TEST     TEST            TEST033

TEST     IND_BT          USERS

 

2.1 Move user test's TEST table to the users tablespace

[oracle@cuug scripts]$ cat ora_impdp_rmp_tbs.sh

 

impdp test/test@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp  tables=test table_exists_action=replace logfile=expdp_prod_0709.log remap_tablespace=test033:users

 

The tables option specifies that the object operated on is the table test.

The table_exists_action option specifies that if the object already exists, replace is performed on it.

remap_tablespace specifies that objects are moved from the test033 tablespace to the users tablespace.

 

2.2 Check which tablespace the TEST table resides in after the migration

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where owner='TEST';

 

OWNER    SEGMENT_NAME    TABLESPACE_NAME

-------- --------------- ---------------

TEST     IND_TEST_ID     USERS

TEST     TEST            USERS

 

3.0 Move all of user test's objects from the users tablespace to test033

3.0.1 Check which tablespace the tables are in before the migration

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where owner='TEST';

 

OWNER    SEGMENT_NAME    TABLESPACE_NAME

-------- --------------- ---------------

TEST     IND_TEST_ID     USERS

TEST     TEST            USERS

TEST     IND_BT          USERS

TEST     FATHER          USERS

TEST     CHILDEN         USERS

TEST     PK_N1           USERS

TEST     T               USERS

TEST     IND_BIT_COL     USERS

TEST     P               USERS

TEST     SYS_C005906     USERS

TEST     T1              USERS

TEST     T2              USERS

TEST     INDEX_TEST      TBS_2K

 

3.0.2 Edit the script

[oracle@cuug scripts]$ cat ora_impdp_rmp_tbs.sh

impdp test/test@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp \
schemas=test \
table_exists_action=replace \
logfile=expdp_0709.log \
remap_tablespace=users:test033

 

The schemas=test option specifies that the schema being operated on is test.

If it is omitted, the import fails with ORA-39126; this has been verified to be an Oracle 10g R2 bug.

 

 

The full import log is attached below:

[oracle@cuug scripts]$ impdp test/test@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp schemas=test table_exists_action=replace logfile=expdp_0709.log remap_tablespace=users:test033

 

Import: Release 10.2.0.1.0 - Production on Thursday, 10 July, 2014 14:50:14

 

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

 

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production

With the Partitioning, OLAP and Data Mining options

Master table "TEST"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded

Starting "TEST"."SYS_IMPORT_SCHEMA_01":  test/********@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp schemas=test table_exists_action=replace logfile=expdp_0709.log remap_tablespace=users:test033

Processing object type DATABASE_EXPORT/SCHEMA/USER

ORA-31684: Object type USER:"TEST" already exists

Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT

Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT

Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE

Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

. . imported "TEST"."TEST"                               37.56 MB  403168 rows

. . imported "TEST"."T1"                                 4.992 KB      10 rows

. . imported "TEST"."CHILDEN"                            7.820 KB      14 rows

. . imported "TEST"."FATHER"                             5.679 KB       5 rows

. . imported "TEST"."INDEX_TEST"                         6.429 KB      10 rows

. . imported "TEST"."P"                                  4.937 KB       2 rows

. . imported "TEST"."T"                                  6.289 KB     100 rows

. . imported "TEST"."T2"                                 4.992 KB      10 rows

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ

ORA-39083: Object type PROCOBJ failed to create with error:

ORA-27477: "TEST.JOB_INSERT_T1" already exists

Failing sql is:

BEGIN

dbms_scheduler.create_job('"JOB_INSERT_T1"',

job_type=>'STORED_PROCEDURE', job_action=>

'sys.chk_lock'

, number_of_arguments=>0,

start_date=>'19-JUN-14 10.20.00.000000 AM +08:00', repeat_interval=>

NULL

, end_date=>NULL,

job_class=>'"DEFAULT_JOB_CLASS"', enabled=>FALSE, auto_drop=>TRUE,comments=>

'My new job'

);COMMIT; END;

 

 

Job "TEST"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 14:50:59

 

 

3.0.3 Check which tablespace the tables are in after the migration

 

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where owner='TEST';

 

OWNER    SEGMENT_NAME              TABLESPACE_NAME

-------- ------------------------- ---------------

TEST     T1                        TEST033

TEST     FATHER                    TEST033

TEST     CHILDEN                   TEST033

TEST     T                         TEST033

TEST     P                         TEST033

TEST     T2                        TEST033

TEST     TEST                      TEST033

 

 

 

 

3.1 Move user test's objects from the test033 tablespace to the test022 tablespace

3.1.1 Check which tablespace the tables are in before the migration

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where owner='TEST' and segment_type='TABLE';

 

OWNER    SEGMENT_NAME              TABLESPACE_NAME

-------- ------------------------- ---------------

TEST     T1                        TEST033

TEST     FATHER                    TEST033

TEST     CHILDEN                   TEST033

TEST     T                         TEST033

TEST     P                         TEST033

TEST     T2                        TEST033

TEST     TEST                      TEST033

 

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where tablespace_name='TEST022' and segment_type='TABLE';

 

no rows selected

3.1.2 Edit the script

[oracle@cuug scripts]$ cat ora_impdp_rmp_tbs.sh

impdp test/test@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp \
schemas=test \
table_exists_action=replace \
logfile=expdp_0709.log \
remap_tablespace=test033:test022

 

The schemas=test option specifies that the schema being operated on is test.

The table_exists_action=replace option specifies that if an object already exists, replace is performed on it.

The remap_tablespace=test033:test022 option specifies that objects are moved from the test033 tablespace to the test022 tablespace.

 

3.1.3 Check which tablespace the tables are in after the migration

 

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where owner='TEST' and segment_type='TABLE';

 

OWNER    SEGMENT_NAME              TABLESPACE_NAME

-------- ------------------------- ---------------

TEST     P                         USERS

TEST     T                         USERS

TEST     T2                        USERS

TEST     TEST                      USERS

TEST     FATHER                    USERS

TEST     CHILDEN                   USERS

TEST     T1                        USERS

 

Although remap_tablespace was specified, the objects were still restored into their original tablespace.

Remapping all objects raised ORA-39126 (the Oracle 10g R2 bug noted above); remapping a single object worked normally, as sketched below.
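
A single-table variant of the 3.1 goal, consistent with the working commands elsewhere in this lab, would look like this (a hedged sketch; this exact invocation does not appear in the original transcript):

[oracle@cuug scripts]$ impdp test/test@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp tables=test table_exists_action=replace logfile=expdp_0709.log remap_tablespace=users:test022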

 

3.3 Move user test's objects from the test033 tablespace to the test022 tablespace

[oracle@cuug scripts]$ impdp test/test@prod   directory=expdp_dir dumpfile=expdp_prod_0709.dmp schemas=test   table_exists_action=replace logfile=expdp_0709.log remap_tablespace=users:test022

 

 

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where tablespace_name='TEST022' and segment_type='TABLE';

 

OWNER    SEGMENT_NAME              TABLESPACE_NAME

-------- ------------------------- ---------------

TEST     T1                        TEST022

TEST     T2                        TEST022

TEST     TEST                      TEST022

TEST     FATHER                    TEST022

TEST     CHILDEN                   TEST022

TEST     P                         TEST022

TEST     T                         TEST022

 

3.4 Move user test's objects from the test022 tablespace to the test033 tablespace

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where tablespace_name='TEST033' and segment_type='TABLE';

 

OWNER    SEGMENT_NAME              TABLESPACE_NAME

-------- ------------------------- ---------------

TEST033  SYS_IMPORT_FULL_01        TEST033

TEST033  SYS_IMPORT_FULL_02        TEST033

TEST033  SYS_IMPORT_FULL_03        TEST033

TEST033  SYS_IMPORT_FULL_04        TEST033

TEST01   TEST                      TEST033
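
The SYS_IMPORT_FULL_* segments above are master tables left behind by earlier Data Pump runs. They can be identified, and dropped once their state is confirmed as NOT RUNNING, with a query such as this (a sketch):

SYS @ prod >select owner_name, job_name, state from dba_datapump_jobs;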

 

 

[oracle@cuug scripts]$ impdp test/test@prod directory=expdp_dir dumpfile=expdp_prod_0709.dmp schemas=test table_exists_action=replace logfile=expdp_0709.log remap_tablespace=test022:test033

 

SYS @ prod >select owner,segment_name,tablespace_name from dba_segments where tablespace_name='TEST022' and segment_type='TABLE';

 

no rows selected

 

 

Reposted from: https://www.cnblogs.com/iyoume2008/p/4690600.html
