Reading Notes -- Oracle Data Pump 1

Just some notes taken while reading the documentation; usually only the overview chapter is covered.

Oracle® Database Utilities
10g Release 2 (10.2)

Part Number B14215-01

1 Overview of Oracle Data Pump

Data Pump Components

Oracle Data Pump is made up of three distinct parts:

  • The command-line clients, expdp and impdp
  • The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
  • The DBMS_METADATA PL/SQL package (also known as the Metadata API)

Oracle Data Pump consists of the three components above.

The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively. They provide a user interface that closely resembles the original export (exp) and import (imp) utilities.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or subsets of a database.

In essence, the clients call the PL/SQL package, using the parameters entered on the command line.

Both data and metadata can be exported and imported.
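As a sketch of typical client invocations (the hr schema, the dpump_dir1 directory object, and the file names are illustrative, not from the source):

```shell
# Export the HR schema to a dump file via the Data Pump Export client
expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_exp.log

# Re-import the same dump file with the Data Pump Import client
impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_imp.log
```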


Note:
All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that, for nonprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. For privileged users, a default directory object is available. See Default Locations for Dump, Log, and SQL Files for more information about directory objects.
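A minimal sketch of the DBA setup the note describes, assuming an illustrative server path, directory name, and grantee:

```shell
# Create a directory object on the server file system and grant it to a
# nonprivileged user so that user can run Data Pump jobs
sqlplus / as sysdba <<'EOF'
CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
EOF
```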


When data is moved, Data Pump automatically uses either direct path load (or unload) or the external tables mechanism, or a combination of both. When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and resubmission of dictionary metadata.

For data, Data Pump automatically chooses direct path, external tables, or a combination of both.

For metadata, Data Pump uses the functionality provided by the DBMS_METADATA PL/SQL package.

The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.

Both packages can be called independently of the Data Pump clients.
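For example, a schema-mode export can be launched directly through the DBMS_DATAPUMP API with no expdp client involved; this is a hedged sketch in which the job name, dump file name, and directory object are assumptions:

```shell
# Start a schema export through the Data Pump API, then detach;
# the job keeps running on the server after DETACH
sqlplus hr/hr <<'EOF'
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                          job_name  => 'HR_API_EXP');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp',
                         directory => 'DPUMP_DIR1');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/
EOF
```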


See Also:

Oracle Database PL/SQL Packages and Types Reference for descriptions of the DBMS_DATAPUMP and DBMS_METADATA packages


 

What New Features Do Data Pump Export and Import Provide?

The new Data Pump Export and Import utilities (invoked with the expdp and impdp commands, respectively) have a similar look and feel to the original Export (exp) and Import (imp) utilities, but they are completely separate. Dump files generated by the new Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.

exp/imp and expdp/impdp are not compatible, so dump files generated by the original exp cannot be imported with impdp.

Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features, except for XML schemas and XML schema-based tables. Original Export and Import support the full set of Oracle database release 9.2 features. Also, the design of Data Pump Export and Import results in greatly enhanced data movement performance over the original Export and Import utilities.

Oracle recommends the new expdp/impdp, which support all 10g features except XML schemas and XML schema-based tables; the new utilities also offer greatly improved performance.

The following are the major new features that provide this increased performance, as well as enhanced ease of use:

【The following are the new features and their advantages】

  • The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See PARALLEL for information about using this parameter in export. See PARALLEL for information about using this parameter in import. (This feature is available only in the Enterprise Edition of Oracle Database 10g.)

【1. The PARALLEL parameter specifies the maximum number of active threads, letting you trade resource consumption against elapsed time; available only in the Enterprise Edition of 10g】
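A hedged sketch of the PARALLEL parameter (schema, directory, and file names are illustrative); the %U substitution variable lets each worker write its own dump file:

```shell
# Run the export with up to four parallel workers (Enterprise Edition only)
expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp PARALLEL=4
```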

  • The ability to restart Data Pump jobs. See START_JOB for information about restarting export jobs. See START_JOB for information about restarting import jobs.

【2. Data Pump jobs can be restarted】

  • The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job. See ATTACH for information about using this parameter in export. See ATTACH for information about using this parameter in import.

【3. Clients can detach from and reattach to long-running jobs without affecting the jobs themselves】
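Reattaching from any client session might look like the following sketch; the job name shown is the kind of default-generated name Oracle uses, and is an assumption here:

```shell
# Reattach to a running export job to monitor or control it
expdp hr/hr ATTACH=sys_export_schema_01
```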

  • Support for export and import operations over the network, in which the source of each operation is a remote instance. See NETWORK_LINK for information about using this parameter in export. See NETWORK_LINK for information about using this parameter in import.

【4. Export and import over the network are supported, with a remote instance as the source of each operation】
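A sketch of a network-mode export, assuming an illustrative database link name and TNS alias (both would need to match your environment):

```shell
# Create a link to the source instance, then export directly across it
sqlplus hr/hr <<'EOF'
CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY hr USING 'src_tns';
EOF
expdp hr/hr NETWORK_LINK=source_db SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=remote_hr.dmp
```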

  • The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced. See REMAP_DATAFILE.

【5. During import, source datafile names can be changed】
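REMAP_DATAFILE takes a source:target pair of file names; the paths below are illustrative, and on most shells the quoting is easier to manage in a parameter file:

```shell
# Rewrite every DDL reference to the old datafile path during a full import
impdp system/manager FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full.dmp \
  REMAP_DATAFILE='/u01/oradata/users01.dbf':'/u02/oradata/users01.dbf'
```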

  • Enhanced support for remapping tablespaces during an import operation. See REMAP_TABLESPACE.

【6. Enhanced tablespace remapping during import】
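A minimal sketch of REMAP_TABLESPACE, assuming illustrative source and target tablespace names:

```shell
# Move imported objects from tablespace EXAMPLE into USERS
impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_TABLESPACE=example:users
```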

  • Support for filtering the metadata that is exported and imported, based upon objects and object types. For information about filtering metadata during an export operation, see INCLUDE and EXCLUDE. For information about filtering metadata during an import operation, see INCLUDE and EXCLUDE.

【7. Metadata can be filtered during export and import, based on objects and object types】
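Filter expressions contain quotes that shells tend to mangle, so a parameter file is the usual vehicle; the names below are illustrative:

```shell
# A parameter file sidesteps shell-quoting problems with INCLUDE filters
cat > hr_filter.par <<'EOF'
SCHEMAS=hr
DIRECTORY=dpump_dir1
DUMPFILE=hr_tables.dmp
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
# EXCLUDE is the complementary filter; INCLUDE and EXCLUDE cannot
# be used together in the same job
EOF
expdp hr/hr PARFILE=hr_filter.par
```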

  • Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available in Import's Interactive-Command Mode.

【8. An interactive-command mode allows monitoring of and interaction with ongoing jobs】
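As a sketch of how this mode is reached and what it offers (command list condensed from the 10.2 interactive-command documentation):

```shell
# While a job runs in logging mode, Ctrl+C switches the client to
# interactive-command mode; typical commands at the Export> prompt:
#   STATUS          -- show job progress
#   STOP_JOB        -- stop the job, leaving it restartable
#   START_JOB       -- restart a stopped job after reattaching
#   CONTINUE_CLIENT -- return to logging mode
#   KILL_JOB        -- detach all clients and delete the job
```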

  • The ability to estimate how much space an export job would consume, without actually performing the export. See ESTIMATE_ONLY.

【9. The space an export would consume can be estimated without actually performing the export】
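A minimal sketch of an estimate-only run (no dump file is written; schema name is illustrative):

```shell
# Report the estimated dump-file space for a schema export, then exit
expdp hr/hr SCHEMAS=hr ESTIMATE_ONLY=y
```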

  • The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. See VERSION for more information about using this parameter in export.

【10. A database object version can be specified at export time, so that data can be imported into an earlier database release】

In import jobs, VERSION applies only to operations over the network. This means that VERSION applies to the version of database objects to be extracted from the source database. See VERSION for more information about using this parameter in import.

For additional information about using different versions, see Moving Data Between Different Database Versions.
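A sketch of the export-side VERSION parameter described above, with illustrative names and an assumed 10.1 target release:

```shell
# Write a dump file containing only objects compatible with release 10.1,
# so it can be imported into a 10.1 database
expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_101.dmp VERSION=10.1
```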

  • Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.) See Default Locations for Dump, Log, and SQL Files for information about some of the implications of server-based operations.

【11. Most Data Pump export and import operations run on the server, whereas the original utilities were primarily client-based】

The remainder of this chapter discusses Data Pump technology as it is implemented in the Data Pump Export and Import utilities. To make full use of Data Pump technology, you must be a privileged user. Privileged users have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. Nonprivileged users have neither.

【To make full use of Data Pump, you must be a privileged user, that is, hold the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles】

Privileged users can do the following:

【Privileged users can:】

  • Export and import database objects owned by others

    【Export and import objects owned by other users】

  • Export and import nonschema-based objects such as tablespace and schema definitions, system privilege grants, resource plans, and so forth

    【Export and import nonschema-based objects, such as tablespace and schema definitions, system privilege grants, and resource plans】

  • Attach to, monitor, and control Data Pump jobs initiated by others

    【Attach to, monitor, and control Data Pump jobs started by others】

  • Perform remapping operations on database datafiles

    【Remap database datafiles】

  • Perform remapping operations on schemas other than their own

    【Remap schemas other than their own】

How Does Data Pump Access Data?

Data Pump supports two access methods to load and unload table row data: direct path and external tables. Because both methods support the same external data representation, data that is unloaded with one method can be loaded using the other method. Data Pump automatically chooses the fastest method appropriate for each table.

【Data Pump accesses data in two ways, direct path and external tables, and automatically chooses the fastest method for each table】


Note:
Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or reenabled.

Data Pump will not load tables that have disabled unique indexes.


Direct Path Loads and Unloads

The Oracle database has provided direct path unload capability for export operations since Oracle release 7.3 and a direct path loader API for OCI since Oracle8i. Data Pump technology enhances direct path technology in the following ways:

Data Pump's improvements to direct path:

  • Support of a direct path, proprietary format unload.

    Supports a direct path, proprietary format unload

  • Improved performance through elimination of unnecessary conversions. This is possible because the direct path internal stream format is used as the format stored in the Data Pump dump files.

    Eliminates unnecessary conversions, improving performance

  • Support of additional datatypes and transformations.

    Supports additional datatypes and transformations

The default method that Data Pump uses for loading and unloading data is direct path, when the structure of a table allows it. Note that if the table has any columns of datatype LONG, then direct path must be used.

When the table structure allows it, the default method is direct path; when a table has a LONG column, direct path must be used.

The following sections describe situations in which direct path cannot be used for loading and unloading.

Situations in Which Direct Path Load Is Not Used

If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to load the data for that table:

【On load, in the following situations Data Pump uses external tables rather than direct path:】

  • A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
  • A domain index exists for a LOB column.
  • A table is in a cluster.
  • There is an active trigger on a pre-existing table.
  • Fine-grained access control is enabled in insert mode on a pre-existing table.
  • A table contains BFILE columns or columns of opaque types.
  • A referential integrity constraint is present on a pre-existing table.
  • A table contains VARRAY columns with an embedded opaque type.
  • The table has encrypted columns.
  • The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
    • There is an active trigger
    • The table is partitioned
    • Fine-grained access control is in insert mode
    • A referential integrity constraint exists
    • A unique index exists
  • Supplemental logging is enabled and the table has at least one LOB column.

Situations in Which Direct Path Unload Is Not Used

If any of the following conditions exist for a table, Data Pump uses the external table method to unload data, rather than direct path:

【On unload, in the following situations Data Pump uses external tables rather than direct path:】

  • Fine-grained access control for SELECT is enabled.
  • The table is a queue table.
  • The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
  • The table contains encrypted columns.
  • A column of an evolved type that needs upgrading.
  • A column of type LONG or LONG RAW that is not last.

External Tables

The Oracle database has provided an external tables capability since Oracle9i that allows reading of data sources external to the database. As of Oracle Database 10g, the external tables feature also supports writing database data to destinations external to the database. Data Pump provides an external tables access driver (ORACLE_DATAPUMP) that reads and writes files. The format of the files is the same format used with the direct path method. This allows for high-speed loading and unloading of database tables. Data Pump uses external tables as the data access mechanism in the following situations:

【Since 9i, reading from external data sources has been supported; as of 10g, writing data out is also supported. Data Pump provides an external tables access driver (ORACLE_DATAPUMP) for reading and writing. Data Pump uses external tables as the data access mechanism in the following situations:】

  • Loading and unloading very large tables and partitions in situations where parallel SQL can be used to advantage
  • Loading tables with global or domain indexes defined on them, including partitioned object tables
  • Loading tables with active triggers or clustered tables
  • Loading and unloading tables with encrypted columns
  • Loading tables with fine-grained access control enabled for inserts
  • Loading tables that are partitioned differently at load time and unload time


Note:

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.

【Data Pump uses the ORACLE_DATAPUMP access driver for the external tables mechanism.

The files Data Pump creates through external tables are not compatible with external tables you create manually with SQL CREATE TABLE ... ORGANIZATION EXTERNAL. One reason is that a manually created external table unloads only data, while Data Pump maintains both data and metadata.】
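For contrast with Data Pump's own dump files, a manual ORACLE_DATAPUMP unload might look like this sketch (table, file, and directory names are illustrative):

```shell
# Manually unload a table with the ORACLE_DATAPUMP driver; the resulting
# file holds data only, unlike the dump files Data Pump itself produces
sqlplus hr/hr <<'EOF'
CREATE TABLE employees_ext
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY dpump_dir1
    LOCATION ('employees_ext.dmp')
  )
  AS SELECT * FROM employees;
EOF
```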

See Also:
Chapter 14, "The ORACLE_DATAPUMP Access Driver"


Accessing Data Over a Database Link

When you perform an export over a database link, the data from the source database instance is written to dump files on the connected database instance. In addition, the source database can be a read-only database.

When you perform an import over a database link, the import source is a database, not a dump file set, and the data is imported to the connected database instance.

Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
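A sketch of such an import, assuming a pre-existing database link and directory object with illustrative names; note that no dump file set is involved:

```shell
# Pull the schema straight from the source database over the link;
# the directory object is still needed for the import log file
impdp hr/hr NETWORK_LINK=source_db SCHEMAS=hr DIRECTORY=dpump_dir1
```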


See Also:
NETWORK_LINK for information about performing exports over a database link

NETWORK_LINK for information about performing imports over a database link


 


Source: ITPUB blog, http://blog.itpub.net/23650854/viewspace-682833/ (please credit the source when reposting).
