Recovering a database that cannot be opened and has no backup, method two: DUL

 When a company's database staff lack experience, or there is no DBA at all, the database may become paralyzed and refuse to open, with no backup available. In that situation there are generally two options:

 a. force the database open;

 b. use the Oracle DUL tool to extract the data.

DUL is generally used only when the database cannot be opened even by force, or when some tablespace file cannot be brought online and the data in that tablespace must be salvaged. Note that this tool needs the SYSTEM tablespace to be intact.

DUL is an Oracle internal tool. Originally it had no restrictions; it now carries a 30-day time limit, meaning a downloaded copy stops working after 30 days of use. Keep in mind that this tool exists only to salvage data: it extracts table data from the database, but objects other than tables - indexes, stored procedures, functions, triggers and the like - cannot be recovered with it. Even for the tables it extracts, the structure is not reproduced exactly: a column defined as id NUMBER(20) NOT NULL comes back on import as just id NUMBER, without the (20) precision or the NOT NULL constraint. So it can only be said to salvage data. The tool writes its output in two formats: a .dmp file that can be imported directly with imp, and a SQL*Loader file that can only be loaded with the sqlldr command. To recover the other objects there are two approaches:

1. Obtain the schema structure, without the data, from the application vendor.

2. Take all data files offline except SYSTEM and UNDO, then force the database open. If that works, the schema structure can be exported:

   exp system/password owner=appuser rows=n

Once you have the schema structure, first import the empty schema, that is, all of the user's objects except the data; then disable the foreign keys and triggers, and only then import the data unloaded by DUL. Note that for large tables you must drop the indexes first and recreate them in parallel after the data has been loaded.

Of the two approaches above, the second is preferable, because the database contains sequences: with the first approach a sequence may be small on the vendor's side but very large in production - we have run into exactly this. If the second approach is impossible and you must use the first, you may need to manually enlarge the sequences' cache.


    The following, taken from the Oracle documentation, describes DUL in detail.


 

DUL's PRINCIPLES and FEATURE LIST

STANDALONE C-PROGRAM

DUL is a standalone C program that directly retrieves rows from tables in data files. The Oracle RDBMS software is NOT used at all. DUL does dirty reads; it assumes that every transaction is committed. Nor does it check/require that media recovery has been done.

LAST RESORT

DUL is intended to retrieve data that cannot be retrieved otherwise. It is NOT an alternative for EXP, SQL*Plus etc. It is meant to be a last resort, not for normal production usage.

Before you use DUL you must be aware that the rdbms has many hidden features to force a bad database open. Undocumented init.ora parameters and events can be used to skip roll forward, to disable rollback, disable certain SMON actions, advance the database scn and more.

DATABASE CORRUPT - BLOCKS OK

The database can be corrupted, but an individual data block used must be 100% correct. During all unloading, checks are made to make sure that blocks are not corrupted and belong to the correct segment. If during a scan a bad block is encountered, an error message is printed in the loader file and to standard output. Unloading will continue with the next row or block.

ROWS in CLUSTERS/TABLES/INDEXES

DUL can and will only unload index/table/cluster data. It will NOT dump triggers, stored procedures, nor create sql scripts for tables or views. (But the data dictionary tables describing them can be unloaded.) The data will be unloaded in a format suitable for SQL*Loader or IMP. A matching control file for SQL*Loader is generated as well.

DUL can unload indices and index organized tables. Index unload is useful to determine how many rows a table should have or to identify the missing rows.

CROSS PLATFORM UNLOADING

Cross-platform unloading is supported. The database can be copied from a different operating system than the DUL-host. (Databases/systems done so far: Sequent/ptx, Vax Vms, Alpha Vms, MVS, HP9000/8xx, IBM AIX, SCO Unix, Alpha OSF/1, Intel Windows NT.)

The configuration parameters within "init.dul" will have to be modified to match those of the original platform and O/S rather than the platform from which the unload is being done.
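As a sketch of why the byte-order setting matters when cross-unloading: the same bytes in a file header decode to different values depending on the endianness assumed. This is an illustrative Python fragment, not DUL code:

```python
import struct

# The same 4 bytes as they might appear in a data file header.
raw = b"\x00\x00\x20\x00"

# Decoded assuming the source platform was big endian vs little endian.
big = struct.unpack(">I", raw)[0]     # e.g. a Sparc/HP source platform
little = struct.unpack("<I", raw)[0]  # e.g. an Intel source platform

print(big, little)  # 8192 vs 2097152 - same bytes, very different numbers
```

Getting osd_big_endian_flag wrong produces exactly this kind of nonsense value, which is why DUL checks it against the file header at startup.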

ROBUST

DUL will not dump, spin or hang no matter how badly corrupted the database is.

(NEARLY) ALL ORACLE FEATURES SUPPORTED

Full support for all database constructs: row chaining, row migration, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS, trailing NULL columns, unlimited extents, the new block layout of Oracle8, and partitioned tables.

Later additions are lobs, compressed indexes, and 9ir2 compressed tables. Varrays and ADTs (user defined objects) are partly supported in sql*loader mode.

ASM is fully supported; files can be extracted from an asm disk group. No mounted ASM instance is used, the disks are accessed directly. Non-default asm allocation unit sizes are supported.

Data can be recovered from export dump files with the unexp command suite. Some initial work has been done for unpump to support data pump files.

SUPPORTED RDBMS VERSIONS

DUL should work with all versions starting with oracle 6. DUL has been tested with versions from 6.0.26 up to 10.2. Even the old block header layout (pre 6.0.27.2) is supported.

MULTIBYTE SUPPORT

DUL itself is essentially a single byte application. The command parser does not understand multi byte characters, but it is possible to unload any multi byte database. For all possible caveats there is a work around.

DUL can optionally convert to UTF8. This is for NCLOBS that are stored in UTF16.

RESTRICTIONS
MLSLABELS

Multi Level Security Labels of trusted oracle are not supported.

(LONG) RAW

DUL can unload (long) raws. Nowadays there is a suitable format in SQL*Loader to preserve all long raws, so long raws and blobs can be unloaded in both modes.

ORACLE8 OBJECT OPTION AND LOBS

Nested tables are not yet supported; if they are needed let me know and it will be added. Varrays and ADTs are supported, also those that are stored as a kernel lob. CLOBS and NCLOBS are supported both in SQL*Loader mode and in exp mode. BLOBS are best handled in exp mode; the hex format generated in SQL*Loader mode is currently not loaded correctly.

PORTABLE

DUL can be ported to any operating system with an ANSI-C compiler. DUL has been ported to many UNIX variants, VMS and Windows NT. Currently all builds are done using gcc and a cross compiler environment on Linux.

RDBMS INTERNALS

A good knowledge of the Oracle RDBMS internals is a prerequisite to be able to use DUL successfully. For instance the Data Server Internals (DSI) courses give a good foundation. There is even a module dedicated to DUL.

 

SETTING UP and USING DUL

CONFIGURATION FILES

There are two configuration files for DUL. "init.dul" contains all configuration parameters (size of caches, details of header layout, oracle block size, output file format). In the control file, "control.dul", the database data file names and the asm disks can be specified.

DATA DICTIONARY AVAILABLE

The Oracle data dictionary is available if the data files which made up the SYSTEM TableSpace are available and usable. The numbers which Oracle assigned to these files and the names you have given them (which do not have to be the original names Oracle knew) must be included in the "control.dul" file. You also need to include the file numbers and names of any files from other TableSpaces from which you wish to unload TABLEs and their data. The lack of inclusion of these files will not affect the data dictionary unload step, but it will affect later TABLE unloading.

USING DUL WHEN USER$, OBJ$, TAB$ and COL$ CAN BE UNLOADED

Steps to follow:

  1. Configure DUL for the target database. This means creating a correct init.dul and control.dul. The SYSTEM TableSpace's data file numbers and names must be included within the control.dul file, along with any data files for TableSpaces from which you wish to unload TABLEs and their data. For Oracle8 and higher the tablespace number and the relative file number must be specified for each datafile.
  2. Use the "BOOTSTRAP;" command to prepare for unloading. The bootstrap process will find the compatibility segment, find the bootstrap$ table, and unload it. The old "dul dictv7.ddl" scripts are no longer needed.
  3. Unload the tables for which data files have been included within the "control.dul" file. Use one of the following commands:
    • "UNLOAD TABLE [owner.]table;" (do not forget the semicolon)
      • This will unload the one table definition and the table's data.
    • "UNLOAD USER user_name;"
      • This unloads all tables and data for the specified user.
    • "UNLOAD DATABASE;"
      • This unloads all of the database tables available (except the user SYS).
NO DATA DICTIONARY AVAILABLE

If data files are not available for the SYSTEM TableSpace the unload can still continue, but USER, TABLE and COLUMN names will not be known. Identifying the tables can be an overwhelming task, but it can be (and has been) done. You need in-depth knowledge about your application and the application tables. Column types can be guessed by DUL, but table and column names are lost. Any old SYSTEM tablespace from the same database, even weeks old, can be of great help! Most of the information that DUL uses does not change (only the dataobj# changes, during truncate or index rebuild).

USING DUL WITHOUT SYSTEM TABLESPACE

Steps to follow:

  1. Configure DUL for the target database. This means creating a correct init.dul and control.dul. (See Port specific parameters.) In this case the control.dul file will need the numbers and names of the datafiles from which TABLEs and data will be unloaded, but it does not require the SYSTEM TableSpace's information.
  2. SCAN DATABASE; : scan the database, build extent and segment map
  3. SCAN TABLES; or SCAN EXTENTS; : gather row statistics
  4. Identify the lost tables from the output of step 3.
  5. UNLOAD the identified tables.
AUTOMATED SEARCH

To ease the hunt for the lost tables: the scanned statistical information in seen_tab.dat and seen_col.dat can be loaded into a fresh database. If you recreate the tables (hopefully the create table scripts are still available) then structure information of a "lost" table can be matched to the "seen" tables' scanned information with two SQL*Plus scripts (fill.sql and getlost.sql).

HINTS AND PITFALLS
  • Names are not really relevant for DUL, only for the person who must load the data. But the unloaded data does not have any value if you do not know from which table it came.
  • The guessed column types can be wrong, even though the algorithm is conservative and decides UNKNOWN if not sure.
  • Trailing NULL columns are not stored in the database. So if the last columns only contain NULLs then the scanner will NOT find them. (During unload, trailing NULL columns are handled correctly.)
  • When a table is dropped, the description is removed from the data dictionary only. The data blocks are not overwritten unless they are reused for a new segment. So the scanner software can see a table that has been dropped.
  • Tables without rows will go unnoticed.
  • Newer objects have a higher object id than older objects. If a table is recreated, or if there is a test and a production version of the same table, the object id can be used to decide.
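The trailing-NULL pitfall can be sketched in a few lines. This is only an illustration of the storage rule, not DUL code:

```python
def store_row(values):
    """Mimic how a row piece omits trailing NULL columns on disk."""
    stored = list(values)
    while stored and stored[-1] is None:
        stored.pop()          # trailing NULLs are simply not written
    return stored

def unload_row(stored, column_count):
    """On unload, pad the row back out to the full column count."""
    return stored + [None] * (column_count - len(stored))

row = ["KING", None, None]
on_disk = store_row(row)              # only ["KING"] is physically stored
print(on_disk, unload_row(on_disk, 3))
```

This is why a scanner that only inspects stored data can never notice columns that are NULL in every row: there is literally nothing on disk to find.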

DDL (DUL Description Language) UNLOAD STATEMENT OVERVIEW

DUL uses an SQL-like command interface. There are DDL statements to unload extents, tables, users or the entire database. Data dictionary information required can be specified in the ddl statements or taken from the previously unloaded data dictionary. The following three statements will unload the DEPT table. The most common form is used if the data dictionary and the extent map are available:

 
         UNLOAD TABLE scott.dept;

All relevant information can be specified in the statementas well:

 
        REM Columns with type in the correct order
        REM The segment header location in the storage clause
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));
 

Oracle version 6:

 
        REM version 6 data blocks have segment header location in each block
        ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

Oracle7:

 
        REM Oracle7 data blocks have object id in each block
 
        ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( OBJNO 1501 );

DUL'S OUTPUT FORMAT

Only complete good rows are written to the output file. For this each row is buffered. The size of the buffer can be changed with the init.dul parameter BUFFER. There is no speed gained with a high BUFFER parameter; it should just be big enough to hold a complete row. Incomplete or bad rows are not written out. The FILE_SIZE_IN_MB init.dul parameter can be used to split the output (at a proper boundary) into multiple files; each file can be loaded individually.
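The split-at-a-proper-boundary behaviour can be sketched as follows. This is a simplified illustration (sizes in bytes rather than megabytes), not the actual DUL implementation:

```python
def split_output(rows, max_bytes):
    """Distribute rows over multiple output files, always splitting at a
    row boundary, in the spirit of FILE_SIZE_IN_MB."""
    files, current, size = [], [], 0
    for row in rows:
        data = row + "\n"
        # Start a new file rather than cut a row in half.
        if current and size + len(data) > max_bytes:
            files.append("".join(current))
            current, size = [], 0
        current.append(data)
        size += len(data)
    if current:
        files.append("".join(current))
    return files

parts = split_output(["r1", "r2", "r3"], max_bytes=6)
print(parts)  # ['r1\nr2\n', 'r3\n'] - the third row rolls to a new file
```

Because every part ends on a row boundary, each generated file is loadable on its own, which is exactly why the split files can be imported individually.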

There are three different modes of output format.

  • Export mode
  • SQL*Loader mode: stream data files
  • SQL*Loader mode: Fixed physical record data files
EXPORT MODE

The generated file is completely different from a table mode export generated by EXP! The file is the minimal format that IMP can load. For each table a separate IMP loadable file will be generated. It is a single table dump file. It contains a header, an insert table statement and the table data. Table grants, storage clauses, or triggers will not be included. A minimal create table statement is included (no storage clause, just column names and types without precision). The character set indication in the generated file header is V6 style. It is set to mean ASCII based character set.

To enable export mode, set the init.dul parameterEXPORT_MODE to TRUE.

As the generated pseudo dump file does not contain character set information, set NLS_LANG to match that of the original database. In export mode no character set conversion is done.

SQL*LOADER MODES

The data in the data file is either not converted at all, or everything is converted to UTF8 if LDR_OUTPUT_IN_UTF8 is set. This setting is required in mixed character set environments as the contents of a data file must have a single character set.

When loading the data you probably need to set NLS_LANG to match that of the original database to prevent unwanted character set conversion.

For both SQL*Loader output formats the columns will be space separated and enclosed in double quotes. Any double quote in the data will be doubled. SQL*Loader recognizes this and will load only one. The character used to enclose the columns can be changed from double quote to any character you like with the init.dul parameter LDR_ENCLOSE_CHAR.

There are two styles of physical record organization:

Stream Mode

Nothing special is done in stream mode; a newline is printed after each record. This is a compact format and can be used if the data does not contain newline characters. To enable stream mode set LDR_PHYS_REC_SIZE = 0 in init.dul.

Fixed Physical Records

This mode is essential if the data can contain newlines. One logical record, one complete row, can be composed of multiple physical records. The default record length is 81; this fits nicely on the screen of a VT220. The physical record size can be specified with LDR_PHYS_REC_SIZE in init.dul.
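The logical-to-physical record mapping can be sketched like this. This is an illustration only; the space-padding of the last physical record is my assumption, not something the text above specifies:

```python
def fixed_records(logical_record, rec_size=81):
    """Split one logical record (a complete row) into fixed-size physical
    records; the last one is space-padded (assumed) to the full length."""
    chunks = [logical_record[i:i + rec_size]
              for i in range(0, len(logical_record), rec_size)]
    if chunks:
        chunks[-1] = chunks[-1].ljust(rec_size)
    return chunks

recs = fixed_records("x" * 100, rec_size=81)
print(len(recs), len(recs[0]), len(recs[1]))  # 2 physical records of 81 bytes
```

Because record boundaries are positional rather than newline-based, a newline byte inside the data is just another byte, which is exactly why this mode is safe for data containing newlines.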

OUTPUT FILE NAMES

The file names generated are: owner name_table name.ext. The extension is ".dmp" for IMP loadable files. ".dat" and ".ctl" are used for the SQL*Loader datafile and the control file. To prevent variable substitution and other unwanted side effects, strange characters are stripped (only alphanumerics and '_' are allowed).
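A sketch of the name generation rule. The exact stripping behaviour (dropping rather than replacing characters) is assumed from the description above; the function name is made up:

```python
import re

def dump_file_name(owner, table, ext="dmp"):
    """Build owner_table.ext, keeping only alphanumerics and '_'
    as the naming rule above describes (assumed: strip, not replace)."""
    clean = lambda s: re.sub(r"[^A-Za-z0-9_]", "", s)
    return f"{clean(owner)}_{clean(table)}.{ext}"

print(dump_file_name("SCOTT", "EMP$2"))   # the '$' is stripped
```

Stripping characters like '$' matters because a name such as EMP$2 could otherwise trigger shell variable substitution when the file is handled in scripts.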

If the FILE parameter is set the generated names will be FILEnnn.ext. This possibility is a work around if the file system does not support long enough file names (old windows with the 8.3 filename format).

SOME DUL INTERNALS

REQUIRED INFORMATION

To unload table data from a database block the following information must be known:

  1. Column/Cluster Information: The number and type of the columns. For char or varchar columns the maximum length as well. The number of cluster columns and the table number in the cluster. This information can be supplied in the unload statement or it can be taken from the previously unloaded USER$, OBJ$, TAB$ and COL$.
  2. Segment/Extent information: When unloading a table the extent table in the data segment header block is used to locate all data blocks. The location of this segment header block (file number and block number) is taken from the data dictionary or can be specified in the unload statement. If the segment header is not correct/available then another method must be used. DUL can build its own extent map by scanning the whole database. (in a separate run of DUL with the scan database statement.)
BINARY HEADERS

C-Structs in block headers are not copied directly; they are retrieved with specialized functions. All offsets of structure members are programmed into DUL. This approach makes it possible to cross-unload (unload an MVS created data file on an HP). Apart from byte order only four layout types have been found so far.

  1. Vax VMS and Netware: no alignment padding between structure members.
  2. Korean Ticom Unix machines: 16 bit alignment of structure members.
  3. MS/DOS: 16 bit alignment and 16 bit wordsize.
  4. Rest of the world (including Alpha VMS): structure member alignment on member size.
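The effect of these alignment conventions on member offsets can be sketched as follows. This is an illustration of alignment padding in general, not DUL's actual offset tables, and the cap of the "align on member size" rule at 4 bytes is my assumption:

```python
def member_offsets(member_sizes, alignment_bits):
    """Compute struct member offsets under a given alignment rule:
    0 = packed, 16 = 2-byte alignment, 32 = align on member size
    (assumed capped at 4 bytes)."""
    offsets, offset = [], 0
    for size in member_sizes:
        if alignment_bits == 0:
            align = 1
        elif alignment_bits == 16:
            align = min(size, 2)
        else:
            align = min(size, 4)
        offset = (offset + align - 1) // align * align  # round up to alignment
        offsets.append(offset)
        offset += size
    return offsets

# a hypothetical header struct { ub1 a; ub4 b; ub2 c; }
print(member_offsets([1, 4, 2], 0))   # packed (Vax VMS/Netware style)
print(member_offsets([1, 4, 2], 32))  # aligned on member size
```

The same three members land at offsets [0, 1, 5] packed but [0, 4, 8] aligned, which is why DUL must know the source platform's convention to find a field in a block header.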
MACHINE DEPENDENCIES

Machine dependencies (of the database) are configurable with parameters:

  • Order of bytes in a word (big/little endian).
  • Number of bits for the low part of the FILE# in a DBA (Block Address).
  • Alignment of members in a C-struct.
  • Number of blocks or bytes before the oracle file header block.
  • Size of a word used in the segment header structure.
UNLOADING THE DATA DICTIONARY

DUL can use the data dictionary of the database to be unloaded if the files for it exist and are uncorrupted. For the data dictionary to be used, internal tables must be unloaded first to external files: (USER$, OBJ$, TAB$ and COL$). The bootstrap command will find and unload the required tables.

DDL (DUL DESCRIPTION LANGUAGE) SPECIFICATION

 
[ ALTER SESSION ] SET init.dul parameter =  value ;
     Most parameters can be changed on the fly. 
 
BOOTSTRAP [LOCATE | GENERATE | COMPLETE
           | UNLOAD   Bootstrap$ segment header block address ];
     Bootstraps the data dictionary. Default is COMPLETE.
     LOCATE finds and unloads the bootstrap$ table.
     GENERATE builds a ddl file based on information in the cache.
     COMPLETE is in fact LOCATE, followed by GENERATE (two times)
 
COMMIT;
     Writes the changed block to the data file.
 
CREATE BLOCK INDEX  index_name  ON  device ;

A block index contains addresses of valid oracle blocks found in a corrupt file system. Useful to merge multiple disk images or to unload from corrupted file systems. This is only useful in extreme file system corruption scenarios.

 
DESCRIBE  owner_name  . table_name ;
 
DUMP [ TABLESPACE  tablespace_no ]
     [ FILE  file_no  ]
     [ BLOCK  block_no  ]
     [ LEVEL  level_no  ] ;
     Not a complete blockdump, mainly used for debugging.
     The block address is remembered.
 
EXTRACT  asm file name  to  output file name  ;
     Copies any ASM file from a disk group to the file system.
     (there was a problem with online redologs; this needs more testing)
 
MERGE block_index INTO [  segment  ];

The merge command uses the information in the index file to locate possible data blocks; it looks for a combination of file number and object id. Each candidate block is compared to the current block in the datafile. If the current block is bad, or has an older scn, the candidate will be written into the datafile. This is only useful in extreme file system corruption scenarios.

 
REM any_text_you_like_till_End_Of_Line : comment
REM  NOT allowed inside ddl statements. ( To avoid a two layer lexical scan).
 
ROLLBACK; # Cancels the UPDATE statements.
 
SHOW     DBA  dba ;                # dba -> file_no block_no calculator
       | DBA  rfile_no block_no ;  # file_no block_no -> dba calculator
       | SIZES ;                   # show sizes of important structs
       | PARAMETER;                # shows the values of all parameters
       | LOBINFO;                  # lob indexes found with SCAN DATABASE
       | DATAFILES;                # summary of configured datafiles
       | ASM DISKS;                # summary of configured asm disks
       | ASM FILES;                # summary of configured datafiles on asm
       | ASM FILE  cid             # extent information for asm file
 
UNEXP [TABLE] [  owner  . ]  table name 
       (  column list  ) [ DIRECT ]
       DUMP FILE  dump file name
       FROM  begin offset  [ UNTIL  end offset  ]
       [ MINIMUM  minimal number of columns  COLUMNS ] ;
 
       To unload data from a corrupted exp dump file. No special setup
       or configuration is required, just the compatible parameter.
       The start offset should be where a row actually begins.
 
UNPUMP
       To unload data from a corrupted expdp (datapump) dump file.
       This is still work in progress, the basic commands work
       but rather complex to use. Contact me if this is needed.
 
UNLOAD DATABASE;
 
UNLOAD USER user_name;
 
UNLOAD [TABLE]  [  schema_name . ]  table_name 
              [ PARTITION(  partition_name ) ]
              [ SUBPARTITION(  sub_partition_name ) ]
              [ (  column_definitions ) ]
              [  cluster_clause  ]
              [  storage_clause  ] ;
 
UNLOAD EXTENT  table_name 
              [ (  column_definitions  ) ]
              [ TABLESPACE  tablespace_no  ] 
              FILE  extent_start_file_number
              BLOCK extent_start_block_number 
              BLOCKS  extent_size_in oracle_blocks ;
 
UNLOAD LOB SEGMENT FOR [  schema_name . ]  table_name   [ (  column name  ) ] ;
 
UNLOAD LOB SEGMENT STORAGE ( SEGOBJNO data obj#) ;
 
UPDATE [ block_address ] SET UB1|UB2|UB4 @ offset_in_block = new_value ;
UPDATE [ block_address ] SET  block element name  = new_value ;
        Now and then we can repair something.
        Patches the current block and dumps it.
        You can issue multiple UPDATE commands.
        Block is not written yet, use COMMIT to write. 
 
storage_clause ::=
        STORAGE ( storage_specification  [ more_storage_specs ] )
 
storage_specification ::=
        OBJNO object_id_number 
|       TABNO cluster_table_number
|       SEGOBJNO cluster/data_object_number       /* v7/v8 style data block id */
|       FILE  data_segment_header_file_number     /* v6 style data block id */
        BLOCK data_segment_header_block_number )   
|       any_normal_storage_specification_but_silently_ignored
 
SCAN DATABASE;

Scans all blocks of all data files. Two or three files aregenerated:

  1. SEG.dat information of found segment headers (index/cluster/table): (object id, file number, and block number).
  2. EXT.dat information of contiguous table/cluster data blocks. (object id(V7), file and block number of segment header (V6), file number and block number of first block, number of blocks, number of tables)
  3. SCANNEDLOBPAGE.dat: information for each lob datablock. This file (optional, only if init.dul:SCAN_DATABASE_SCANS_LOB_SEGMENTS=TRUE) can possibly be huge. Also the required memory size can be problematic. The purpose is twofold: 1: to possibly work around corrupt lob indexes during unload table; 2: to unload lob segments (for deleted lobs or lob segments without lob index or parent table). Meaning of the fields in SCANNEDLOBPAGE.dat: (segobj#, lobid, fat_page_no, version(wrap, base), ts#, file#, block#)
 
SCAN DUMP FILE  dump file name
        [ FROM  begin offset  ]
        [ UNTIL  end offset  ];
 
        Scans an export dump file to provide the
        create/insert statements and the offsets in the dump file.
 
SCAN LOB SEGMENT    storage clause ;
SCAN LOB SEGMENT FOR  table name  [.  column name] ;
        Scans the lob segment to produce LOBPAGE.dat information,
        but then for this segment only. Probably quicker and
        smaller. For partitioned objects use scan database.
 
SCAN TABLES;
        Uses SEG.dat and EXT.dat as input.
        Scans all tables in all data segments (a header block and at least one
        matching extent with at least 1 table).
 
SCAN EXTENTS;
        Uses SEG.dat and EXT.dat as input.
        All extents for which no corresponding segment header has been found.
        (Only useful if a tablespace is not complete, or a segment header
        is corrupt).
 
EXIT QUIT and EOF all cause DUL to terminate.
 

DDL (DUL DESCRIPTION LANGUAGE) DESCRIPTION

Rules for UNLOAD EXTENT and UNLOAD TABLE:
Extent Map

UNLOAD TABLE requires an extent map. In 99.99% of the cases the extent map in the segment header is available. In the rare 0.01% that the segment header is lost, an extent map can be built with the scan database command. The self-built extent map will ONLY be used during an unload if the parameter USE_SCANNED_EXTENT_MAP is set to TRUE.

All data blocks have some ID of the segment they belong to. But there is a fundamental difference between V6 and V7. Data blocks created by Oracle version 6 have the address of the segment header block. Data blocks created by Oracle7 have the segment object id in the header.

Column Specification

The column definitions must be specified in the order the columns are stored in the segment, that is, ordered by col$.segcol#. This is not necessarily the same order as the columns were specified in the create table statement. Cluster columns are moved to the front, longs to the end. Columns added to the table with the alter table command are always stored last.

Unloading a single extent

UNLOAD EXTENT can be used to unload 1 or more adjacent blocks. The extent to be unloaded must be specified with the STORAGE clause. To specify a single extent use: STORAGE ( EXTENTS( FILE fno BLOCK bno BLOCKS #blocks) ) (FILE and BLOCK specify the first block, BLOCKS the size of the extent).

DUL specific column types

There are two extra DUL specific data types:

  1. IGNORE: the column will be skipped as if it was not there at all.
  2. UNKNOWN: a heuristic guess will be made for each column.

In SQL*Loader mode there are even more DUL specific data types:

  1. HEXRAW: column is HEX dumped.
  2. LOBINFO: show some information from LOB locators.
  3. BINARY NUMBER: Machine word as used in a LOB index.
Identifying USER$, OBJ$, TAB$ and COL$

DUL uses the same bootstrap procedure as the rdbms. That is, it uses the root dba from the system datafile header to locate the bootstrap$ table. Depending on the version this root dba is either the location of the compatibility segment containing the bootstrap$ address or, for the newer versions, the address of the bootstrap$ table itself. The bootstrap$ table is unloaded and its contents parsed to find the first four tables (USER$, OBJ$, TAB$ and COL$). The other tables are unloaded based on information in these first four.

DESCRIPTION OF SCAN COMMANDS

SCAN TABLES and SCAN EXTENTS scan for the same information and produce similar output. ALL columns of ALL rows are inspected. For each column the following statistics are gathered:

  • How often the column is seen in a data block.
  • The maximum internal column length.
  • How often the column IS NULL.
  • How often the column consists of at least 75% printable ascii.
  • How often the column consists of 100% printable ascii.
  • How often the column is a valid oracle number.
  • How often the column is a nice number. (not many leading or trailing zeros)
  • How often the column is a valid date.
  • How often the column is a possible valid rowid.

These statistics are combined and a column type is suggested. Using this suggestion, five rows are unloaded to show the result. These statistics are dumped to two files (seen_tab.dat and seen_col.dat). There are SQL*Loader and SQL*Plus scripts available to automate a part of the identification process. (Currently known as the getlost option.)
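A toy version of this statistics-based type guess might look like the sketch below. The real DUL gathers many more statistics (dates, rowids, "nice" numbers); this only shows the conservative decide-UNKNOWN-if-unsure idea:

```python
def suggest_type(values):
    """Guess a column type from sample values; fall back to UNKNOWN
    unless every non-NULL value agrees (toy illustration, not DUL's rules)."""
    seen = [v for v in values if v is not None]   # NULLs carry no evidence
    if not seen:
        return "UNKNOWN"
    numeric = sum(v.lstrip("-").isdigit() for v in seen)
    printable = sum(v.isprintable() and not v.lstrip("-").isdigit() for v in seen)
    if numeric == len(seen):
        return "NUMBER"
    if printable == len(seen):
        return "CHAR"
    return "UNKNOWN"            # mixed evidence: stay conservative

print(suggest_type(["7839", "7698", "7521"]))   # NUMBER
print(suggest_type(["KING", "BLAKE", None]))    # CHAR
```

Note the conservatism: a column with mixed evidence yields UNKNOWN rather than a wrong guess, mirroring the behaviour described in the hints above.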

DESCRIBE

There is a describe command. It will show the dictionary information for the table, available in DUL's dictionary cache.

DUL STARTUP SEQUENCE

During startup DUL goes through the following steps:

  • the parameter file "init.dul" is processed.
  • the DUL control file (default "control.dul") is scanned.
  • Try to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL's data dictionary cache.
  • Try to load seg.dat and col.dat.
  • Accept DDL-statements or run the DDL script specified as first arg.

DUL parameters to be specified in init.dul:

ALLOW_TRAILER_MISMATCH

BOOLEAN

Use of this parameter is strongly discouraged; it will seldom produce more rows. Use only if you fully understand what it means and why you want it. It skips the check for a correct block trailer. The blocks failing this test are split or corrupt. But it saves you the trouble of patching some blocks.

ALLOW_DBA_MISMATCH

BOOLEAN

Use of this parameter is strongly discouraged; it will seldom produce more rows. Use only if you fully understand what it means and why you want it. It skips the check for a correct block address. The blocks failing this test are probably corrupt. But it saves you the trouble of patching some blocks.

ALLOW_OTHER_OBJNO

BOOLEAN

If your dictionary is older than your datafiles then the data object ids can differ for truncated tables. With this parameter set to true DUL will issue a warning but use the value from the segment header. All other blocks are fully checked. This is for special cases only.

ASCII2EBCDIC

BOOLEAN

Should (var)char fields be translated from EBCDIC to ASCII? (For unloading an MVS database on an ASCII host.)

BUFFER

NUMBER (bytes)

Row output buffer size used in both export and SQL*Loader mode. Each row is first stored in this buffer. Only complete rows without errors are written to the output file.

COMPATIBLE

NUMBER

Database version; valid values are 6, 7, 8 or 9. This parameter must be specified.

CONTROL_FILE

TEXT

Name of the DUL control file (default: "control.dul").

DB_BLOCK_SIZE

NUMBER

Oracle block size in bytes (maximum 32 K).

DC_COLUMNS

NUMBER

DC_OBJECTS

NUMBER

DC_TABLES

NUMBER

DC_USERS

NUMBER

Sizes of the dul dictionary caches. If one of these is too low the cache will be automatically resized.

EXPORT_MODE

BOOLEAN

EXPort-like output mode or SQL*Loader format.

FILE

TEXT

Base for (dump or data) file name generation. Use this on 8.3 DOS-like file systems.

FILE_SIZE_IN_MB

NUMBER (Megabytes)

Maximum dump file size. Dump files are split into multiple parts. Each file has a complete header and can be loaded individually.

LDR_ENCLOSE_CHAR

TEXT

The character to enclose fields in SQL*Loader mode.

LDR_PHYS_REC_SIZE

NUMBER

Physical record size for the generated loader datafile.

LDR_PHYS_REC_SIZE = 0: no fixed records, each record is terminated with a newline.

LDR_PHYS_REC_SIZE > 2: fixed record size.

MAX_OPEN_FILES

Maximum number of database files that are concurrently kept open at the OS level.

OSD_BIG_ENDIAN_FLAG

Byte order in a machine word. Big Endian is also known as MSB first. DUL sets the default according to the machine it is running on. For an explanation of why this is called Big Endian, you should read Gulliver's Travels.

OSD_DBA_FILE_BITS

File number size in a DBA in bits. Or, to be more precise, the size of the low order part of the file number.

OSD_FILE_LEADER_SIZE

Bytes/blocks added before the real oracle file header block.

OSD_C_STRUCT_ALIGNMENT

C structure member alignment (0, 16 or 32). The default of 32 is correct for most ports.

OSD_WORD_SIZE

Size of a machine word, always 32, except for MS/DOS (16).

PARSE_HEX_ESCAPES

Boolean default FALSE

Use \\xhh hex escape sequences in strings while parsing. If set to true then strange characters can be specified using escape sequences. This feature is also for specifying multi-byte characters.

USE_SCANNED_EXTENT_MAP

BOOLEAN

Use the scanned extent map in ext.dat when unloading a table. The normal algorithm uses the extent map in the segment header. This parameter is only useful if some segment headers are missing or incorrect.

WARN_RECREATE_FILES

BOOLEAN (TRUE)

Set to FALSE to suppress the warning message if an existing file is overwritten.

WRITABLE_DATAFILES

BOOLEAN (FALSE)

Normal use of DUL will only read the database files. However the UPDATE and the SCAN RAW DEVICE commands will write as well. The parameter is there to prevent accidental damage.

SAMPLE init.dul :

 
# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40
 
# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_file_leader_size = 1
 
# database parameters
db_block_size = 8k
 
# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81

Configuring the port dependent parameters

Starting from rdbms version 10G osd parameters are easy to configure. Typically all parameters can be used at their defaults. The only one that might need attention is osd_big_endian_flag, when doing a cross-platform unload, where the original database platform is different from the current machine. If osd_big_endian_flag is set incorrectly, it is detected at startup, when doing file header inspection.

Collection of known Parameters

For pre-10G databases there is a list of known parameters in the osd wiki page, a list of osd (Operating System Dependent) parameters for almost every platform. If your platform is not in the list you can use the suggestions below to determine the parameters. (And then please inform me so I can add them to the list.)

osd_big_endian_flag

Big endian or little endian (byte order in machine words): HP, SUN and mainframes are generally big endian: OSD_BIG_ENDIAN_FLAG = TRUE. DEC and Intel platforms are little endian: OSD_BIG_ENDIAN_FLAG = FALSE. The default is correct for the platform where DUL is running on.

There is no standard trick for this; the following might work on a unix system:

  echo dul | od -x
  If the output is like:
     0000000 6475 6c0a
     0000004
  You are on a big endian machine (OSD_BIG_ENDIAN_FLAG=TRUE).
 
  If you see:
     0000000 7564 0a6c
     0000004
  This is a little endian machine (OSD_BIG_ENDIAN_FLAG=FALSE).
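The od trick can also be sketched in Python (an illustration, not part of DUL): struct can unpack the same probe string as 16-bit words in both byte orders, reproducing the two od outputs above, and sys.byteorder reports the native order of the current machine.

```python
import struct
import sys

# Native byte order of the machine this runs on:
# 'big' corresponds to OSD_BIG_ENDIAN_FLAG = TRUE.
print(sys.byteorder)

# Reproduce the od -x word values for the probe string "dul\n":
word = b"dul\n"
print([hex(w) for w in struct.unpack(">2H", word)])  # ['0x6475', '0x6c0a'] big endian view
print([hex(w) for w in struct.unpack("<2H", word)])  # ['0x7564', '0xa6c']  little endian view
```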
osd_dba_file_bits

The number of bits in a dba used for the low order part of the file number. Perform the following query:

 
  SQL> select dump(chartorowid('0.0.1')) from dual;
  
  Typ=69 Len=6: 8,0,0,0,0,0    ->       osd_dba_file_bits =  5 (SCO)
  Typ=69 Len=6: 4,0,0,0,0,0    ->       osd_dba_file_bits =  6 (Sequent , HP)
  Typ=69 Len=6: 1,0,0,0,0,0    ->       osd_dba_file_bits =  8 (NCR,AIX)
  Typ=69 Len=6: 0,16,0,0,0,0   ->       osd_dba_file_bits = 12 (MVS)
  Typ=69 Len=10: 0,0,0,0,0,64,0,0,0,0      osd_dba_file_bits = 10 (Oracle8)       
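As a cross-check of the dump values above, a small sketch (hypothetical helper, not DUL code) can split a 32-bit DBA into file and block number, assuming the file number occupies the high-order osd_dba_file_bits bits:

```python
def split_dba(dba, file_bits):
    # Split a 32-bit data block address into (file#, block#),
    # assuming the file number occupies the high-order file_bits bits.
    block_bits = 32 - file_bits
    return dba >> block_bits, dba & ((1 << block_bits) - 1)

# The first four dump bytes above, read as one big-endian word,
# all encode file 1, block 0 on their respective platforms:
print(split_dba(0x08000000, 5))   # SCO          -> (1, 0)
print(split_dba(0x04000000, 6))   # Sequent, HP  -> (1, 0)
print(split_dba(0x01000000, 8))   # NCR, AIX     -> (1, 0)
print(split_dba(0x00100000, 12))  # MVS          -> (1, 0)
```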
OSD_C_STRUCT_ALIGNMENT

Structure layout in data file headers. 0: no padding between members in a C-struct (VAX/VMS only). 16: some Korean Ticom machines and MS/DOS. 32: structure members are member-size aligned (all others, including ALPHA/VMS). Check the following query:

 
SELECT * FROM v$type_size
WHERE type IN ( 'KCBH', 'KTNO', 'KTBBH', 'KTBIT', 'KDBH'
              , 'KTECT', 'KTETB', 'KTSHC') ;

In general osd_c_struct_alignment = 32 and the following output is expected:

 
COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              24
KTB      KTBBH    TRANSACTION FIXED HEADER                 48
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           44
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            8
 
8 rows selected.

For VAX/VMS and Netware ONLY osd_c_struct_alignment = 0 and this output is expected:

 
COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              23
KTB      KTBBH    TRANSACTION FIXED HEADER                 42
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           39
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            7
 
8 rows selected.

If there is a different list this will require some major hacking and sniffing and possibly a major change to DUL. (Email Bernard.van.Duijnen@oracle.com)

osd_file_leader_size

Number of blocks/bytes before the Oracle file header. Unix datafiles have an extra leading block (file size, block size, magic number). A large number (> 100) is seen as a byte offset, a small number is seen as a number of Oracle blocks.

 
Unix    :       osd_file_leader_size = 1
Vms     :       osd_file_leader_size = 0
Desktop :       osd_file_leader_size = 1 (or 512 for old personal oracle)
Others  :       Unknown ( Use Andre Bakker's famous PATCH utility to find out)
                An Oracle7 file header block starts with the pattern 0X0B010000.

You can add an additional byte offset in control.dul in the optional third field (for instance for AIX or DEC UNIX data files on raw devices).
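The byte/block heuristic described above can be sketched as follows (the helper name is hypothetical; the > 100 cut-off is the rule stated in the text):

```python
def leader_bytes(osd_file_leader_size, db_block_size):
    # Per the rule above: a large value (> 100) is a byte offset,
    # a small value is a count of Oracle blocks.
    if osd_file_leader_size > 100:
        return osd_file_leader_size
    return osd_file_leader_size * db_block_size

print(leader_bytes(1, 8192))    # typical Unix datafile: one 8k block -> 8192
print(leader_bytes(512, 8192))  # old Personal Oracle: 512 is a byte offset -> 512
```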

Controlfile syntax specification

A control file (default name "control.dul") is used to specify asm disks, block indexes and the data file names. The format of the control file has been extended.

Currently there are three types of specifications in the DUL control file, each entry on a separate line. The asm disks must precede the asm files.

 
  control_file_line ::= asm_disk_spec | file_piece_spec | block_index_spec

If COMPATIBLE is 10 or higher you can also specify asm disks. It is generally sufficient to specify the device name. All properties are automatically retrieved by header inspection. The full syntax is only needed when header inspection is impossible, that is for disks with corrupt headers. The syntax is:

 
DISK  device name [  disk group options  ]
 
 disk group option  ::= GROUP  disk group name 
                           | DISK_NO  disk number in group 
                           | F1B1  File1 Block1 location

A block index is a way to access oracle blocks on corrupt file systems. In general a corrupt file system is not wiped out, it is not empty. Due to the specific layout of oracle blocks it is possible to recognize data blocks and store their location in the block index. See also the create block index command. A block_index_name is a normal identifier; it is used to construct a unique file name.

 
BLOCK INDEX  block_index_name
 

Each entry can contain a part of a datafile. The smallest unit is a single data block. This way it is possible to split datafiles that are too big for DUL into parts where each part is smaller than 2GB.

 
 

In general it is sufficient to specify the file name, even for a single block. If compatible is 10 or higher the file numbers and the tablespace numbers will be read from the file header.

If the specified details are different from the file header, DUL will give a warning but use your specification. This is to be able to unload files with a corrupted header block. For debugging it is possible to dump the file header.

 
 

The optional extra leader offset is an extra byte offset that will be added to all lseek() operations for that datafile. This makes it possible to skip over the extra 4k block for some AIX raw devices, or the extra 64K on Tru64 raw devices.

 
 
 
  file_piece_spec ::= 
         [ [ tablespace_no ] relative_file_number ] data_file_name
         [ optional extra leader offset ]
         [ startblock block_no ]
         [ endblock block_no ]
 
 
 
Examples
 
 
 
   # AIX version 7 example with one file on raw device
   1 /usr/oracle/dbs/system.dbf
   8 /dev/rdsk/data.dbf 4096
 
 
 
   # Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
   0  1 /fs1/oradata/PMS/system.dbf
   1  2 /tmp/huge_file_part1 startblock 1 endblock 1000000
   1  2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
   1  2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000
 
 
 
# ASM disks for two disk groups
disk /media/maxtor/asm/dgn1
disk /media/maxtor/asm/dgn2
disk /media/maxtor/asm/dgn3
disk /media/maxtor/asm/dgn4
disk /media/maxtor/asm/dgodd
 
# system datafile in the first asm disk group
+DGN/db102/datafile/system.257.621616979
 
# users datafile in a different disk group
+DGODD/db102/datafile/users.257.621616683
 
# a so called big file tablespace, use 1024 for the file#
 8 1024 /home/oracle/v102/dbs/bigfilets
 
# Or let DUL find out itself from the header
 /home/oracle/v102/dbs/bigfilets
 
# one tablespace with a different block size
/home/oracle/v102/dbs/ts16k.dbf block_size 16k
 
# or let DUL find out by header inspection
/home/oracle/v102/dbs/ts16k.dbf
 
 
 
 

Sample unload session: data dictionary usable for DUL

 
 
 
1.  create a suitable "init.dul" 
 
 
2.  create a control.dul 
 
 
    sqlplus /nolog
    connect / as sysdba
    startup mount
    set trimspool on pagesize 0 linesize 256 feedback off
    column name format a200
    spool control.dul
    select ts#, rfile#, name from v$datafile;
    exit
edit the result
 
 
 
For Oracle7 a different query must be used:
   select * from v$dbfile;
 
3.   start DUL and bootstrap;
 
 
$ dul
 
Data UnLoader 10.2.1.16 - Oracle Internal Only - on Thu Jun 28 11:37:24 2007
with 64-bit io functions
 
Copyright (c) 1994 2007 Bernard van Duijnen All rights reserved.
 
 Strictly Oracle Internal use Only
DUL> bootstrap;
Probing file = 1, block = 377
. unloading table                BOOTSTRAP$      57 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 57 entries loaded
Parsing Bootstrap$ contents
DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 10
 OBJ$: segobjno 18, file 1
 TAB$: segobjno 2, tabno 1, file 1
 COL$: segobjno 2, tabno 5, file 1
 USER$: segobjno 10, tabno 1, file 1
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$   52275 rows unloaded
. unloading table                      TAB$    1943 rows unloaded
. unloading table                      COL$   59310 rows unloaded
. unloading table                     USER$      70 rows unloaded
Reading USER.dat 70 entries loaded
Reading OBJ.dat
 52275 entries loaded and sorted 52275 entries
Reading TAB.dat 1943 entries loaded
Reading COL.dat 59310 entries loaded and sorted 59310 entries
Reading BOOTSTRAP.dat 57 entries loaded
...
Some more messages for all the other TABLES
...
Database character set is WE8ISO8859P1
Database national character set is AL16UTF16
DUL> unload user SCOTT;
About to unload SCOTT's tables ...
. unloading table                       EMP      14 rows unloaded
 
 
 
 
 

Example unload session: data dictionary UNUSABLE for DUL

 
 
 
1.  create a suitable "init.dul" (See config guide) 
 
2.  create a control.dul See above 
 
3.  scan the database for segment headers and extents: 
 
 
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:10:16 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL> scan database;
data file 1 20480 blocks scanned
data file 4 7680 blocks scanned
data file 5 512 blocks scanned
DUL>quit
 
 
4.  Restart DUL and scan the found tables for column statistics; this creates a huge amount of output: 
 
 
 
echo scan tables \; | dul > scan.out&
 
[ many lines here]
 
 
Object id 1601 table number 0
UNLOAD TABLE T1601_0 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER, C5 DATE
        , C6 NUMBER, C7 NUMBER, C8 NUMBER )
    STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
 
Colno  Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
    1    14        3    0%   0%   0% 100% 100%   0%   0%
    2    14        6    0% 100% 100% 100%  14%   0%  21%
    3    14        9    0% 100% 100% 100%  14%   0%   0%
    4    14        3    7%   0%   0% 100% 100%   0%   0%
    5    14        7    0%   0%   0%   0%   0% 100%   0%
    6    14        3    0%   0%   0% 100% 100%   0%   0%
    7    14        2   71%   0%   0% 100% 100%   0%   0%
    8    14        2    0%   0%   0% 100% 100%   0%   0%
 
"7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"
 
"7499" "-0.000025253223" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00" "1600" "30+
 
0" "30"
 
"7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500" "30"
 
"7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" "" "20"
 
"7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250" "1400" "30"
 
[ many more lines here ]
 
 

5.  This looks familiar; use the above information and your knowledge of the emp table to compose:

 
 
UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
        hiredate date, sal number, comm number, deptno number)
 STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
 
 
6.  use this statement to unload emp: 
 
 
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:46:33 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Loaded 350 segments
Loaded 204 extents
Extent map sorted
DUL> UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
DUL 2> hiredate date, sal number, comm number, deptno number)
DUL 3> STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
. unloading table                       EMP      14 rows unloaded
DUL>quit
 
 
 
 

Example unload session: Incorrect init.dul Parameters

 
 
WRONG osd_dba_file_bits size
 
 

This can generate output similar to below. Normally this should not happen since you should create a demo database and check this via the DUL documented (in html page) query.

The mismatch in DBA's is only in the file number (first number in brackets) part. The second number, the block number, is correct.
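A quick sketch (hypothetical decoder, assuming the file number sits in the high-order bits of a 32-bit DBA) shows why only the file number part is affected: decoding with two extra file bits turns file 1 into file 4 while leaving the block number intact, matching warnings such as Block[1][2] versus [4][2].

```python
def split_dba(dba, file_bits):
    # File number in the high-order file_bits bits, block number in the rest.
    block_bits = 32 - file_bits
    return dba >> block_bits, dba & ((1 << block_bits) - 1)

# File 1, block 2 encoded with 6 file bits, then decoded with 8 (two too many):
dba = (1 << 26) | 2
print(split_dba(dba, 6))  # (1, 2)  correct setting
print(split_dba(dba, 8))  # (4, 2)  file# wrong, block# unchanged
```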

 

 
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:40:33 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [4][2]
DUL: Warning: Bad cache layer header file#=1, block#=2
 
DUL: Warning: Block[1][3] DBA in block mismatch [4][3]
DUL: Warning: Bad cache layer header file#=1, block#=3
 
...........and etc..........
 
 
WRONG osd_file_leader_size
 
 

 

This may create output similar to below, but many other flavours are possible. In this case we are a fixed number of blocks off. The file number is correct; the difference in the block numbers is constant:

 
 
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:44:23 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
 
DUL: Warning: Block[1][2] DBA in block mismatch [1][3]
DUL: Warning: Bad cache layer header file#=1, block#=2
 
DUL: Warning: Block[1][3] DBA in block mismatch [1][4]
DUL: Warning: Bad cache layer header file#=1, block#=3
 
...........and etc..........
 
 
 
WRONG osd_c_struct_alignment
 
 

This may generate output similar to the following:

 
 
 
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:46:10 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
. unloading table OBJ$
 
DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262145
OS error 2: No such file or directory
 
DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262146
OS error 2: No such file or directory
 
...........and etc..........
 
WRONG db_block_size
 
 

The following output was generated when the db_block_size was set too small. The correct value was 4096 and it was set to 2048. Normally, the value for this parameter should be taken from the Oracle instance's init.ora file, and it will then not be set incorrectly.

 
 
 
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Thu Sep 4 12:38:25 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [513][1159680]
DUL: Warning: File=1, block 2: illegal block version 2
DUL: Warning: Block[1][2] Illegal block type[0]
DUL: Warning: Bad cache layer header file#=1, block#=2
 
DUL: Warning: Block[1][4] DBA in block mismatch [1][2]
DUL: Warning: File[1]Block[4]INCSEQ mismatch[90268!=0]
DUL: Warning: Bad cache layer header file#=1, block#=4
 
DUL: Warning: Block[1][6] DBA in block mismatch [1][3]
DUL: Warning: File[1]Block[6]INCSEQ mismatch[139591710!=86360346]
DUL: Warning: Bad cache layer header file#=1, block#=6
 
...........and etc..........
 
 
QUOTE MISSING
 
 

If you get the following error it is caused by the data dictionary tables "USER$, OBJ$, TAB$ and COL$" not being correctly generated. To fix this error simply delete all dictv6.ddl or dictv7.ddl created .dat and .ctl files and restart.

 
 
 
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:49:30 1997
 
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
 
 
DUL: Error: Quote missing

Salvaging data from corrupt EXP dump files - UNEXP Tutorial

If you do not know anything about the structure of an EXP dump file this can be difficult. Here is a quick explanation. Apart from the file header a dump file has MARKERS that identify the various sections. In each table section there will be SQL statements. The most interesting part is the create table statement, followed by the insert into table statement. The insert statement is directly followed by the bind information (number of columns, and for each column its type and bind length and a little more). Then it is followed by the actual columns. Each column is preceded by a two byte length, followed by the actual column data. There are several tricks possible for longer columns. The end of the column data is marked by the special length marker 0xFFFF. There is no marker for the beginning of a row. Resynching after a corruption is trial and error. Corruptions are generally not immediately detectable. The format is slightly different for DIRECT export, so you will have to use the DIRECT option for DIRECT exports. The offset to be specified is the beginning of a row. In general this is directly behind the bind array, but for optimal flexibility you can start anywhere in the row data.
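The column layout described above can be sketched as a minimal reader. This is an illustration only: it assumes little-endian length fields and a conventional (not DIRECT) export, and it ignores NULL and long-column handling.

```python
import io
import struct

def read_rows(buf, ncols):
    # Each column: a two-byte length followed by that many bytes of data.
    # 0xFFFF is the special end-of-table-data length marker; there is no
    # per-row marker, so rows are delimited by counting ncols columns.
    rows = []
    while True:
        (length,) = struct.unpack("<H", buf.read(2))
        if length == 0xFFFF:
            return rows
        row = [buf.read(length)]
        for _ in range(ncols - 1):
            (length,) = struct.unpack("<H", buf.read(2))
            row.append(buf.read(length))
        rows.append(row)

# Synthetic two-column table data section ending with the 0xFFFF marker:
data = io.BytesIO(b"\x04\x007369\x05\x00SMITH\xff\xff")
print(read_rows(data, 2))  # [[b'7369', b'SMITH']]
```

Real dump files still need the row offset found with scan dump file, and resynchronisation after a corruption remains trial and error.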

The first step is to scan the dump file to find the offsets and the sql statements. Each output line starts with the offset where the item is found.

 
DUL>  scan dump file expdat.dmp;
0: CSET: 1 (US7ASCII)                # Character set info from the header
3: SEAL EXPORT:V10.02.01             # the Seal - the exp version tag
20: DBA SYSTEM                       # exp done as SYSTEM
8461: CONNECT SCOTT                  # section for user SCOTT
8475: TABLE "EMP"
                                     # complete create table statement
8487: CREATE TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))  PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS
 
                                     # Insert statement
8829: INSERT INTO "EMP" ("EMPNO", "ENAME", "JOB", "MGR", "HIREDATE",
"SAL", "COMM", "DEPTNO") VALUES (:1, :2, :3, :4, :5, :6, :7, :8)
 
                                     # BIND information
8957: BIND information for 8 columns
 col[  1] type 2 max length 22
 col[  2] type 1 max length 10 cset 31 (WE8ISO8859P1) form 1
 col[  3] type 1 max length 9 cset 31 (WE8ISO8859P1) form 1
 col[  4] type 2 max length 22
 col[  5] type 12 max length 7
 col[  6] type 2 max length 22
 col[  7] type 2 max length 22
 col[  8] type 2 max length 22
Conventional export                  # Conventional means NOT DIRECT
 
9003: start of table data            # Here begins the first row

Now build an unexp statement from the create table statement, the direct/conventional information and the start of the column data.

 
UNEXP TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))
dump file expdat.dmp from 9003;
 
Unloaded 14 rows, end of table marker at 9670 # so we have our famous 14 rows
 
 

 

This builds the normal SQL*Loader file and a matching control file. In the output file one extra column is added; this is related to the status of the row. A P means the row is Partial (some columns missing), R means Resynch, it is the first row after a resynchronisation, and O means Overlap, the previous row had errors, but the new row partly overlaps the other one.