Notes on an ORA-31696 error while importing data with impdp

Today I was helping a colleague import data from a dump file into a test database, and the job kept failing with ORA-31696:

 

[racdb2@oracle]$ impdp pebank/pebank directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

 

Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 16 October, 2012 14:47:23

 

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

 

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

Master table "PEBANK"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

Starting "PEBANK"."SYS_IMPORT_FULL_01":  pebank/******** directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "PEBANK"."MCJNL" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

ORA-39152: Table "PEBANK"."MCJNLDATA" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

ORA-39152: Table "PEBANK"."MCJNLQUERYLOG" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31696: unable to export/import TABLE_DATA:"PEBANK"."MCJNLQUERYLOG" using client specified AUTOMATIC method

ORA-31696: unable to export/import TABLE_DATA:"PEBANK"."MCJNLDATA" using client specified AUTOMATIC method

ORA-31693: Table data object "PEBANK"."MCJNL":"MCJNL_2011_X" failed to load/unload and is being skipped due to error:

ORA-00001: unique constraint (PEBANK.PK_MCJNL1) violated

. . imported "PEBANK"."MCJNL":"MCJNL_2009_10"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2009_11"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2009_12"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_01"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_02"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_03"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_04"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_05"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_06"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_07"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_08"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_09"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_10"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_11"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_12"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_01"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_02"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_03"                0 KB       0 rows

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "PEBANK"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 14:51:44

 

Two of the tables could not be imported at all; each failed with ORA-31696, "unable to export/import TABLE_DATA ... using client specified AUTOMATIC method".

A quick search shows this is an Oracle bug: Bug 4239903 "IMPDP FAILED IF LONG DATATYPE IS THERE IN THE TABLE". Both tables contain a LONG column, and I had specified the table_exists_action option.

The bug affects releases 10.1.0.2 through 10.2.0.4: when impdp loads a table that contains a LONG column into a target database where the table already exists and table_exists_action=append is used, the job raises ORA-31696.

According to the official documentation, on releases 10.1.0.2 through 10.2.0.4 the options are: use table_exists_action=replace (you cannot split the job into content=metadata_only followed by content=data_only; metadata and data have to be imported in a single pass), or drop/disable the constraints on the table before the import, or fall back to the original exp/imp utilities instead of expdp/impdp, or upgrade the database to 10.2.0.5, where the bug is fixed:


DataPump Import (IMPDP) Fails For Table With Column Datatype LONG With Error ORA-31696 [ID 305819.1]



Last updated: 2012-03-09    Type: PROBLEM    Status: PUBLISHED    Priority: 3


In this Document
  Symptoms
  Cause
  Solution
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2
Information in this document applies to any platform.

Symptoms

DataPump import fails with error ORA-31696 while loading data into a pre-existing table, if there is a LONG column in that table. This is demonstrated by the following example:

connect / as sysdba

create user test identified by test default tablespace users temporary tablespace temp;
grant connect, resource to test;

create or replace directory tmp as '/tmp';
grant read, write on directory tmp to test;

connect test/test

-- create table with LONG column
create table a_tab
(
   id    number,
   text_v varchar2(10),
   text_l long
);
alter table a_tab add constraint a_tab_pk primary key (id);

-- populate the table
begin
  for i in 1..10 loop
    insert into a_tab values (i, 'Text '||lpad (to_char (i), 5, '0'), 'Text LONG '||lpad (to_char (i), 990, '0'));
  end loop;
  commit;
end;
/

set long 1000

select * from a_tab;

ID        TEXT_V
---------- ----------
TEXT_L
--------------------------------------------------------------------------------
        1 Text 00001
Text LONG 0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
....
0000000000000000000000000000000000000001

        2 Text 00002
Text LONG 0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000002
....

10 rows selected.


Export the table with:

#> expdp test/test directory=tmp dumpfile=a_tab.dmp content=data_only tables=a_tab logfile=expdp_a_tab.log


Then:

truncate table a_tab;


and import the data with:

#> impdp test/test directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log


This fails with error:

Import: Release 10.2.0.1.0 - 64bit Production on Friday, 09 March, 2012 9:58:40

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"TEST"."A_TAB" using client specified AUTOMATIC method
Job "TEST"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 09:58:43

Cause

The following restrictions exist regarding data load into a pre-existing table:

- cannot use external table mode if there is a LONG column
- cannot use direct path load mode if an enabled constraint other than a table check constraint is present on the pre-existing table

Due to these restrictions, the procedure KUPD$DATA_INT.SELECT_MODE returns 'load_nopossible', and DataPump import fails with the error message.
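
A quick way to check up front whether a pre-existing table is caught by both restrictions is a data dictionary query along these lines. This is only a rough sketch; run it as the importing schema (PEBANK in the case above), using the standard USER_TAB_COLUMNS and USER_CONSTRAINTS views:

-- tables in the current schema that have a LONG/LONG RAW column
-- and at least one enabled constraint other than a check constraint
select c.table_name
  from user_tab_columns c
 where c.data_type in ('LONG', 'LONG RAW')
   and exists (select 1
                 from user_constraints k
                where k.table_name = c.table_name
                  and k.status = 'ENABLED'
                  and k.constraint_type <> 'C');

Any table returned here hits both restrictions at once on the affected releases (LONG blocks external tables, the enabled constraint blocks direct path), which is exactly the ORA-31696 situation described above.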

Solution

Please choose one of the following options:

1. Import both metadata and data at once, if the table has a LONG column and an enabled constraint.

Or:

2. First disable (or drop) the constraints on the existing table and then start the import.

Or:

3. Use the original export/import (exp/imp) to transfer the table from source to target.

Or:

4. Beginning with version 10.2.0.5, importing data into a pre-existing table with a LONG column is possible. The same test above returns during import:

Import: Release 10.2.0.5.0 - 64bit Production on Friday, 09 March, 2012 9:57:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"                             15.46 KB      10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:57:12

 

Import: Release 11.1.0.7.0 - 64bit Production on Friday, 09 March, 2012 9:49:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"                             15.75 KB      10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:49:34

 

Import: Release 11.2.0.3.0 - Production on Fri Mar 9 09:45:23 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"                             15.74 KB      10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:45:31

References

BUG:4239903 - IMPDP FAILED IF LONG DATATYPE IS THERE IN THE TABLE
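
Back to the case at the start of this post: upgrading the test database to 10.2.0.5 was not practical, so option 2 from the note (disable the enabled constraints on the pre-existing tables, run the import, then re-enable them) is the quickest route. The sketch below is only illustrative: PK_MCJNLDATA and PK_MCJNLQUERYLOG are made-up constraint names, so look up the real ones in USER_CONSTRAINTS first.

-- find the enabled non-check constraints that block a direct path load
select table_name, constraint_name, constraint_type
  from user_constraints
 where table_name in ('MCJNLDATA', 'MCJNLQUERYLOG')
   and status = 'ENABLED'
   and constraint_type <> 'C';

-- disable them (constraint names here are hypothetical); a plain DISABLE
-- normally also drops the index backing a primary key or unique constraint
-- (unless KEEP INDEX is specified), which matters because a unique index
-- by itself would still block the direct path load
alter table mcjnldata disable constraint PK_MCJNLDATA;
alter table mcjnlquerylog disable constraint PK_MCJNLQUERYLOG;

-- re-run the import exactly as before
impdp pebank/pebank directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

-- re-enable (and validate) the constraints once the data is in
alter table mcjnldata enable constraint PK_MCJNLDATA;
alter table mcjnlquerylog enable constraint PK_MCJNLQUERYLOG;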

 

Further reading:

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1]



Last updated: 2011-08-26    Type: HOWTO    Status: PUBLISHED    Priority: 3


In this Document
  Goal
  Solution
     1. Introduction.
     2. Export Data Pump: unloading data in "Direct Path" mode.
     3. Export Data Pump: unloading data in "External Tables" mode.
     4. Import Data Pump: loading data in "Direct Path" mode.
     5. Import Data Pump: loading data in "External Tables" mode.
     6. How to enforce a specific load/unload method?
     7. Known issues.
     @8. For Support: Enhancement Requests.
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2 - Release: 10.1 to 11.2
Oracle Server - Personal Edition - Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Oracle Server - Standard Edition - Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Enterprise Manager for RDBMS - Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 7-Feb-2011***

Goal

Starting with Oracle 10g, Oracle Data Pump can be used to move data in and out of a database. Data Pump can make use of different methods to move the data, and will automatically choose the fastest method. It is possible though, to manually enforce a specific method. This document demonstrates how to specify the method with which data will be loaded or unloaded with Data Pump.

Solution

1. Introduction.

Data Pump can use four mechanisms to move data in and out of a database:

  • Data file copying;
  • Direct path;
  • External tables;
  • Network link import.

The two most commonly used methods to move data in and out of databases with Data Pump are the "Direct Path" method and the "External Tables" method.

1.1. Direct Path mode. 
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it.

1.2. External Tables mode. 
If data cannot be moved in direct path mode, or if there is a situation where parallel SQL can be used to speed up the data move even more, then the external tables mode is used. The external table mechanism creates an external table that maps the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database.
Note: When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.

1.3. Data File Copying mode.
This mode is used when a transport tablespace job is started, i.e.: the TRANSPORT_TABLESPACES parameter is specified for an Export Data Pump job. This is the fastest method of moving data because the data is not interpreted nor altered during the job, and Export Data Pump is used to unload only structural information (metadata) into the dump file.

1.4. Network Link Import mode. 
This mode is used when the NETWORK_LINK parameter is specified during an Import Data Pump job. This is the slowest of the four access methods because this method makes use of an INSERT SELECT statement to move the data over a database link, and reading over a network is generally slower than reading from a disk.

The "Data File Copying" and "Network LinkImport" methods to move data in and out of databases are outside the scopeof this article, and therefore not discussed any further.

For details about the access methods of the classic export client (exp), see:
Note:155477.1 "Parameter DIRECT: Conventional Path Export Versus Direct Path Export"

2. Export Data Pump: unloading data in "Direct Path" mode.

Export Data Pump will use the "Direct Path" mode to unload data in the following situations:

EXPDP will use DIRECT_PATH mode if:

2.1. The structure of a table allows a Direct Path unload, i.e.:
     - The table does not have fine-grained access control enabled for SELECT.
     - The table is not a queue table.
     - The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.
     - The table does not contain encrypted columns.
     - The table does not contain a column of an evolved type that needs upgrading.
     - If the table has a column of datatype LONG or LONG RAW, then this column is the last column.

2.2. The parameters QUERY, SAMPLE, or REMAP_DATA were not used for the specified table in the Export Data Pump job.

2.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).

Note that with an unload of data in Direct Path mode, parallel I/O execution processes (PX processes) cannot be used to unload the data in parallel (parallel unload is not supported in Direct Path mode).

3. Export Data Pump: unloading data in "External Tables" mode.

Export Data Pump will use the "External Tables" mode to unload data in the following situations:

EXPDP will use EXTERNAL_TABLE mode if:

3.1. Data cannot be unloaded in Direct Path mode, because of the structure of the table, i.e.:
     - Fine-grained access control for SELECT is enabled for the table.
     - The table is a queue table.
     - The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
     - The table contains encrypted columns.
     - The table contains a column of an evolved type that needs upgrading.
     - The table contains a column of type LONG or LONG RAW that is not last.

3.2. Data could also have been unloaded in "Direct Path" mode, but the parameters QUERY, SAMPLE, or REMAP_DATA were used for the specified table in the Export Data Pump job.

3.3. Data could also have been unloaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the unload even more.

Note that with an unload of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to unload the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

4. Import Data Pump: loading data in "Direct Path" mode.

Import Data Pump will use the "Direct Path" mode to load data in the following situations:

IMPDP will use DIRECT_PATH if:

4.1. The structure of a table allows a Direct Path load, i.e.:
     - A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
     - A domain index does not exist for a LOB column.
     - The table is not in a cluster.
     - The table does not have BFILE columns or columns of opaque types.
     - The table does not have VARRAY columns with an embedded opaque type.
     - The table does not have encrypted columns.
     - Supplemental logging is not enabled, or supplemental logging is enabled and the table does not have a LOB column.
     - The table into which data is being imported is a pre-existing table and:
        – There is not an active trigger, and:
        – The table is partitioned and has an index, and:
        – Fine-grained access control for INSERT mode is not enabled, and:
        – A constraint other than table check does not exist, and:
        – A unique index does not exist.

4.2. The parameters QUERY or REMAP_DATA were not used for the specified table in the Import Data Pump job.

4.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).


5. Import Data Pump: loading data in "External Tables" mode.

Import Data Pump will use the "External Tables" mode to load data in the following situations:

IMPDP will use EXTERNAL_TABLE if:

5.1. Data cannot be loaded in Direct Path mode, because at least one of the following conditions exists:
     - A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
     - A domain index exists for a LOB column.
     - A table is in a cluster.
     - A table has BFILE columns or columns of opaque types.
     - A table has VARRAY columns with an embedded opaque type.
     - The table has encrypted columns.
     - Supplemental logging is enabled and the table has at least one LOB column.
     - The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
        – There is an active trigger
        – The table is partitioned and does not have any indexes
        – Fine-grained access control for INSERT mode is enabled for the table.
        – An enabled constraint exists (other than table check constraints)
        – A unique index exists

5.2. Data could also have been loaded in "Direct Path" mode, but the parameters QUERY or REMAP_DATA were used for the specified table in the Import Data Pump job.

5.3. Data could also have been loaded in "Direct Path" mode, but the table or partition is relatively large (> 250 Mb) and parallel SQL can be used to speed up the load even more.

Note that with a load of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to load the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

6. How to enforce a specific load/unload method?

In very specific situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data. Example:

%expdp system/manager ... ACCESS_METHOD=DIRECT_PATH  
%expdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE 

or:

%impdp system/manager ... ACCESS_METHOD=DIRECT_PATH  
%impdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE 

Important Need-To-Know's when the parameter ACCESS_METHOD is specified for a job:

  • The parameter ACCESS_METHOD is an undocumented parameter and should only be used when requested by Oracle Support.
  • If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.
  • If import Data Pump cannot choose due to conflicting restrictions, an error will be reported:
    ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."EMP" using client specified AUTOMATIC method
  • The parameter can only be specified when the Data Pump job is initially started (i.e. the parameter cannot be specified when the job is restarted).
  • If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.
  • Enforcing a specific method may result in a slower performance of the overall Data Pump job, or errors such as:

... 
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA 
ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."MY_TAB" using client specified DIRECT_PATH method
... 

  • To determine which access method is used, a Worker trace file can be created, e.g.:

%expdp system/manager DIRECTORY=my_dir \ 
DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log \ 
TABLES=scott.my_tab TRACE=400300

The Worker trace file shows the method with which the data was loaded (or unloaded for Import Data Pump):

... 
KUPW:14:57:14.289: 1: object: TABLE_DATA:"SCOTT"."MY_TAB" 
KUPW:14:57:14.289: 1: TABLE_DATA:"SCOTT"."MY_TAB" external table, parallel: 1
...

For details, see also: 
Note:286496.1 "Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump"

7. Known issues.

7.1. Bug 4722517 - Materialized view log not updated after import into existing table
Defect:  Bug:4722517 "MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE"
Symptoms:  a materialized view is created with FAST REFRESH on a master table; if data is imported into this master table, then these changes (inserts) do not show up in the materialized view log
Releases:  10.1.0.2.0 and higher
Fixed in:  not applicable, closed as not-a-bug
Patched files:  not applicable
Workaround:  if possible import into a temporary holding table then copy the data with "insert as select" into the master table
Cause:  a fast refresh does not apply changes that result from bulk load operations on masters, such as an INSERT with the APPEND hint used by Import Data Pump
Trace:  not applicable, changes are not propagated
Remarks:  see also Note:340789.1 "Import Datapump (Direct Path) Does Not Update Materialized View Logs"
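
A minimal sketch of that holding-table workaround, assuming the master table is called MASTER_TAB and the holding table MASTER_TAB_STAGE (both made-up names): the point is simply that a conventional INSERT ... SELECT, unlike the direct-path/APPEND load done by Data Pump, is recorded in the materialized view log.

-- after Data Pump has loaded the rows into the holding table MASTER_TAB_STAGE,
-- move them with conventional DML so the MV log on MASTER_TAB is maintained
insert into master_tab select * from master_tab_stage;
commit;
drop table master_tab_stage purge;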

7.2. Bug 5599947 - Export Data Pump is slow when table has a LOB column
Defect:  Bug:5599947 "DATAPUMP EXPORT VERY SLOW"
Symptoms:  Export Data Pump has low performance when exporting a table with a LOB column
Releases:  11.1.0.6 and below
Fixed in:  not applicable, closed as not feasible to fix
Patched files:  not applicable
Workaround:  if possible re-organize the large table with the LOB column and make it partitioned
Cause:  if a table has a LOB column, and the unload or load takes place in "External Tables" mode, then we cannot make use of parallel I/O execution processes (PX processes)
Trace:  not applicable
Remarks:  see also Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

7.3. Bug 5941030 - Corrupt blocks after Import Data Pump when table has LONG / LONG RAW column
Defect:  Bug:5941030 "Datapump import can produce corrupt blocks when there is a LONG / LONG RAW"
Symptoms:  Direct Path import of a LONG / LONG RAW column can create corrupt blocks in the database. If DB_BLOCK_CHECKING is enabled then an ORA-600 [6917] error can be signalled. If not then the corrupt block can cause subsequent problems, like ORA-1498 (block check failure) on an analyze of the table.
Releases:  11.1.0.6 and below
Fixed in:  10.2.0.5.0 and 11.1.0.7.0 and higher; for some platforms a fix on top of 10.2.0.2.0 and on top of 10.2.0.3.0 is available with Patch:5941030
Patched files:  kdbl.o
Workaround:  if possible use the classic export and import clients to transfer this table
Cause:  internal issue with column count when loading a table with a LONG/LONG RAW column in Direct Path mode
Trace:  not applicable
Remarks:  see also Note:457128.1 "Logical Corruption Encountered After Importing Table With Long Column Using DataPump"

@ 8. For Support: Enhancement Requests.

@ Open Enhancement Requests: 

References

BUG:4722517 - MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE
BUG:4727162 - PRODUCT ENHANCEMENT: ADD NEW DATAPUMP EXT TAB ACCESS METHOD WITHOUT APPEND HINT
BUG:5599947 - DATAPUMP EXPORT VERY SLOW
BUG:5941030 - DATAPUMP IMPORT CAN CORRUPT DATA WHEN THERE IS A LONG / LONG RAW
BUG:5943346 - PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN
NOTE:155477.1 - Parameter DIRECT: Conventional Path Export Versus Direct Path Export
NOTE:286496.1 - Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump
NOTE:340789.1 - Import Datapump (Direct Path) Does Not Update Materialized View Logs
NOTE:365459.1 - Parallel Capabilities of Oracle Data Pump
NOTE:453895.1 - Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp)
NOTE:457128.1 - Logical Corruption Encountered After Importing Table With Long Column Using DataPump
NOTE:469439.1 - IMPDP Can Fail with ORA-31696 if ACCESS_METHOD=DIRECT_PATH Is Manually Specified
http://www.oracle.com/technology/pub/notes/technote_pathvsext.html

 

 
