SQL*Loader (SQLLDR) Control File Notes

Example 9-1 Sample Control File

-- This is a sample control file   

LOAD DATA

CHARACTERSET UTF8 LENGTH SEMANTICS BYTE

INFILE 'sample.dat'    

BADFILE 'sample.bad'    

DISCARDFILE 'sample.dsc' DISCARDMAX 100   

APPEND

INTO TABLE emp

WHEN (57) = '.'

TRAILING NULLCOLS

(hiredate SYSDATE,

deptno POSITION(1:2)  INTEGER EXTERNAL(2)  NULLIF deptno=BLANKS,

job POSITION(7:14)  CHAR TERMINATED BY WHITESPACE NULLIF job=BLANKS  "UPPER(:job)",

mgr POSITION(28:31) INTEGER EXTERNAL TERMINATED BY WHITESPACE NULLIF mgr=BLANKS,

ename  POSITION(34:41) CHAR TERMINATED BY WHITESPACE  "UPPER(:ename)",

empno  POSITION(45) INTEGER EXTERNAL TERMINATED BY WHITESPACE,

sal    POSITION(51) CHAR  TERMINATED BY WHITESPACE "TO_NUMBER(:sal,'$99,999.99')",

comm   INTEGER EXTERNAL  ENCLOSED BY '(' AND '%' ":comm * 100")

  1. INFILE 'c:/topdir/subdir/datafile*.dat'

Use INFILE to specify the data file. The data file name may contain wildcards (* and ?).

Note: you can also specify the data file with the command-line parameter DATA; the command-line value overrides the INFILE clause in the control file. If neither is given, the default data file is the control file name with a .dat extension.
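
A hedged sketch of that precedence on the command line (the user scott and the file names are assumptions used only for illustration):

sqlldr USERID=scott CONTROL=sample.ctl
(loads the data file named by INFILE in sample.ctl)

sqlldr USERID=scott CONTROL=sample.ctl DATA=other.dat
(DATA overrides INFILE, so other.dat is loaded instead)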

To load more than one data file, use multiple INFILE clauses, one per data file.

You can also give each data file its own bad file and discard file, for example:

INFILE  mydat1.dat  BADFILE  mydat1.bad  DISCARDFILE mydat1.dis

INFILE  * 

INFILE  mydat3.dat  DISCARDFILE  mydat3.dis

INFILE  mydat4.dat  DISCARDMAX  10

INFILE * means the data is stored in the control file itself; the data section begins after the BEGINDATA keyword.

When you use BEGINDATA, make sure no spaces or other characters follow it on that line; otherwise they are treated as the first data record.
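
A minimal sketch of an inline-data control file (the table dept_stage and its columns are assumptions, and the table is assumed to be empty since the default INSERT method is used):

LOAD DATA
INFILE *
INTO TABLE dept_stage
FIELDS TERMINATED BY ','
(deptno, dname)
BEGINDATA
10,Accounting
20,Research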

The os_file_proc_clause is operating-system specific; for Windows, see Oracle Database Platform Guide for Microsoft Windows for information about using the os_file_proc_clause.

INFILE can also specify how the data file delimits a physical record. There are three record formats: fixed record format, variable record format, and stream record format. If no format is specified, the default is stream record format.

A particular data file can be in fixed record format, variable record format, or stream record format. The record format can be specified in the control file with the INFILE parameter. If no record format is specified, then the default is stream record format.

Fixed Record Format

Specifies how many bytes each physical record occupies.

A file is in fixed record format when all records in a data file are the same byte length.

Although this format is the least flexible, it results in better performance than variable or stream format.

Syntax: INFILE datafile_name "fix n"   -- n is the length of every record, in bytes

Example

Data file (a period represents a space):

396,...ty,.4922,beth,\n

68773,ben,.

1,.."dave",

5455,mike,.

Control file:

load data

infile 'exp1.dat' "fix 11"

into table example

fields terminated by ',' optionally enclosed by '"'

(col1, col2)

After the load above, there are five logical records (rows).

The first physical record is 396,...ty,. which is exactly eleven bytes (assuming a single-byte character set). The second record is 4922,beth, followed by the newline character (\n) which is the eleventh byte, and so on.

(Newline characters are not required with the fixed record format; it is simply used here to illustrate that if used, it counts as a byte in the record length.)

Note that the length is always interpreted in bytes, even if character-length semantics are in effect for the file.

Variable Record Format

This format provides some added flexibility over the fixed record format and a performance advantage over the stream record format.

Syntax: INFILE "datafile_name" "var n"

n means the first n bytes of each record hold that record's length; the actual data starts after those n bytes.

n specifies the number of bytes in the record length field. If n is not specified, then SQL*Loader assumes a length of 5 bytes. Specifying n larger than 40 results in an error.

Data file (a period represents a space):

009.396,.ty,0104922,beth,

012..68773,ben,

Control file:

load data

infile 'exp2.dat' "var 3"

into table example

fields terminated by ',' optionally enclosed by '"'

(col1 char(5), col2 char(7))

Here the first 3 bytes give each record's length: 009 means the first record is 9 bytes long, i.e. ".396,.ty,"; the second record is 10 bytes, i.e. "4922,beth,"; the third record is 12 bytes, i.e. "..68773,ben,".

the record's first three bytes indicate the length of the field. The exp2.dat data file consists of three physical records. The first is specified to be 009 (9) bytes long, the second is 010 (10) bytes long (plus a 1-byte newline), and the third is 012 (12) bytes long (plus a 1-byte newline). Note that newline characters are not required with the variable record format. This example also assumes a single-byte character set for the data file.

Stream Record Format

Used to specify the record terminator; the terminator is not counted as part of any field.

A file is in stream record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator. Stream record format is the most flexible format, but there can be a negative effect on performance.

Syntax: INFILE datafile_name ["str terminator_string"]

The terminator_string is specified as either 'char_string' or X'hex_string' where:

'char_string' is a string of characters enclosed in single or double quotation marks

X'hex_string' is a byte string in hexadecimal format

When the terminator_string contains special (nonprintable) characters, it should be specified as an X'hex_string'. However, some nonprintable characters can be specified as ('char_string') by using a backslash. For example:

\n indicates a line feed

\t indicates a horizontal tab

\f indicates a form feed

\v indicates a vertical tab

\r indicates a carriage return

If the character set specified with the NLS_LANG initialization parameter for your session is different from the character set of the data file, then character strings are converted to the character set of the data file. This is done before SQL*Loader checks for the default record terminator. Hexadecimal strings are assumed to be in the character set of the data file, so no conversion is performed.

On UNIX-based platforms, if no terminator_string is specified, then SQL*Loader defaults to the line feed character, \n.

On Windows-based platforms, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the data file. This means that if you know that one or more records in your data file has \n embedded in a field, but you want \r\n to be used as the record terminator, then you must specify it.

Example:

load data

infile 'exp3.dat'  "str '|\n'"

into table example

fields terminated by ',' optionally enclosed by '"'

(col1 char(5), col2 char(7))

Data file:

396,ty,|

4922,beth,|

Load result:

COL1  COL2

----- -------

396   ty

4922  beth

  2. OPTIONS (BINDSIZE=100000, SILENT=(ERRORS, FEEDBACK))

Values given on the command line override the values in the OPTIONS clause. The following command-line parameters can be specified in the OPTIONS clause (see the example after this list):

BINDSIZE = n

COLUMNARRAYROWS = n

DATE_CACHE = n

DEGREE_OF_PARALLELISM= {degree-num|DEFAULT|AUTO|NONE}

DIRECT = {TRUE | FALSE}

EMPTY_LOBS_ARE_NULL = {TRUE | FALSE}

ERRORS = n

EXTERNAL_TABLE = {NOT_USED | GENERATE_ONLY | EXECUTE}

FILE = tablespace file

LOAD = n

MULTITHREADING = {TRUE | FALSE}

PARALLEL = {TRUE | FALSE}

READSIZE = n

RESUMABLE = {TRUE | FALSE}

RESUMABLE_NAME = 'text string'

RESUMABLE_TIMEOUT = n

ROWS = n

SDF_PREFIX = string

SILENT = {HEADER | FEEDBACK | ERRORS | DISCARDS | PARTITIONS | ALL}

SKIP = n   

SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}

SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}

STREAMSIZE = n

TRIM= {LRTRIM|NOTRIM|LTRIM|RTRIM|LDRTRIM}
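
A hedged sketch of the OPTIONS clause at the top of a control file (the parameter values are arbitrary illustrations, not recommendations):

OPTIONS (DIRECT=TRUE, ERRORS=50, ROWS=1000, SKIP=1)
LOAD DATA
INFILE 'sample.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, deptno)

The same parameters given on the sqlldr command line (for example ERRORS=0) still take precedence over these values.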

  3. DEFAULT EXPRESSION CACHE n

Specifies how many default expression values are evaluated at a time; the default is 100. This can improve performance when a sequence is used in a default column expression.

DEFAULT EXPRESSION CACHE n clause to specify how many default expressions are evaluated at a time by the direct path load. The default value is 100.

Using the DEFAULT EXPRESSION CACHE clause can significantly improve performance when default column expressions that include sequences are evaluated.

At the end of the load there may be sequence numbers left in the cache that never get used. This can happen when the number of rows to load is not a multiple of n. If you require no loss of sequence numbers, then specify a value of 1 for this clause.
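
For context, a hedged SQL sketch of the kind of table where this clause matters (the sequence and table names are assumptions):

CREATE SEQUENCE emp_seq;

CREATE TABLE emp_stage (
  empno NUMBER DEFAULT emp_seq.NEXTVAL,   -- default expression backed by a sequence
  ename VARCHAR2(30));

A direct path load into emp_stage that does not supply empno evaluates the default expression for every row; DEFAULT EXPRESSION CACHE controls how many of those values are fetched at a time, and a value of 1 avoids unused sequence numbers at some cost in performance.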

  4. FIELD NAMES {FIRST FILE|FIRST FILE IGNORE|ALL FILES|ALL FILES IGNORE|NONE}

Controls whether the first record of the data file is used to determine the field order.

FIRST FILE:

Only the first record of the first data file is treated as a list of field names, and that record is skipped during the load; use this when the field order or the number of fields in the data file differs from the table's columns. Indicates that the first data file contains a list of field names for the data in the first record. This list uses the same delimiter as the data in the data file. The record is read for setting up the mapping between the fields in the data file and the columns in the target table. The record is skipped when the data is processed. This can be useful if the order of the fields in the data file is different from the order of the columns in the table, or if the number of fields in the data file is different from the number of columns in the target table.

FIRST FILE IGNORE:

The first record of the first data file is simply skipped and not used for anything. Indicates that the first data file contains a list of field names for the data in the first record, but that the information should be ignored. The record will be skipped when the data is processed, but it will not be used for setting up the fields.

ALL FILES:

The first record of every data file is a list of field names. Indicates that all data files contain a list of field names for the data in the first record. The first record is skipped in each data file when the data is processed. The fields can be in a different order in each data file. SQL*Loader sets up the load based on the order of the fields in each data file.

ALL FILES IGNORE:

Indicates that all data files contain a list of field names for the data in the first record, but that the information should be ignored. The record is skipped when the data is processed in every data file, but it will not be used for setting up the fields.

NONE: Indicates that the data file contains normal data in the first record. This is the default.
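
For illustration, a hedged sketch of the kind of header record that FIRST FILE consumes (the file contents are made up):

Data file:
ename,empno,deptno
Smith,1119,50
Snyder,1120,50

The first record lists field names using the same comma delimiter as the data; it is read only to map the data file fields to table columns (here the order differs from the table's empno, ename, deptno column order) and is then skipped when the data is loaded.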

  5. FIELDS CSV [WITH EMBEDDED|WITHOUT EMBEDDED] [FIELDS TERMINATED BY ','] [OPTIONALLY ENCLOSED BY '"']

Use this when the data file is in stream record format with the normal record terminator (for example, \n on UNIX/Linux and \n or \r\n on Windows).

This assumes that the file is a stream record format file with the normal carriage return string (for example, \n on UNIX or Linux operating systems and either \n or \r\n on Windows operating systems). Record terminators can be included (embedded) in data values.

Notes on using the FIELDS CSV clause (see the example after this list):

  1. The WITH EMBEDDED and WITHOUT EMBEDDED options specify whether record terminators are included (embedded) within any fields in the data. If WITH EMBEDDED is used, then embedded record terminators must be enclosed, and intra-datafile parallelism is disabled for external table loads.
  2. The TERMINATED BY ',' and OPTIONALLY ENCLOSED BY '"' options are the defaults and do not have to be specified. You can override them with different termination and enclosure characters.
  3. When the CSV clause is used, only delimitable data types are allowed as control file fields. Delimitable data types include CHAR, datetime, interval, and numeric EXTERNAL. The TERMINATED BY and ENCLOSED BY clauses cannot be used at the field level when the CSV clause is specified.
  4. When the CSV clause is specified, normal SQL*Loader blank trimming is done by default. You can specify PRESERVE BLANKS to avoid trimming of spaces. Or, you can use the SQL functions LTRIM and RTRIM in the field specification to remove left and/or right spaces.
  5. When the CSV clause is specified, the INFILE * clause is not allowed. This means that there cannot be any data included in the SQL*Loader control file.
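
A hedged sketch of a control file using the CSV clause (the file, table, and column names are assumptions; the comma terminator and double-quote enclosure are the defaults and are therefore omitted):

LOAD DATA
INFILE 'emp.csv'
APPEND
INTO TABLE emp
FIELDS CSV WITH EMBEDDED
TRAILING NULLCOLS
(empno, ename, deptno)

WITH EMBEDDED says a quoted field may itself contain a record terminator (for example, a value spanning two lines), at the cost of disabling intra-datafile parallelism for external table loads.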

  6. NULLIF {=|!=}{"char_string"|x'hex_string'|BLANKS}

The NULLIF clause can be specified at the table level or at the field level. When the field value matches (or, with !=, does not match) the specified value, the field is set to NULL before it is inserted into the table.

SQL*Loader checks the specified value against the value of the field in the record. If there is a match using the equal or not equal specification, then the field is set to NULL for that row. Any field that has a length of 0 after blank trimming is also set to NULL.

Note: The char_string and hex_string values must be enclosed in either single quotation marks or double quotation marks.

If you do not want a particular field to follow the table-level NULLIF, specify NO NULLIF on that field.

If you do not want the default NULLIF or any other NULLIF clause applied to a field, you can specify NO NULLIF at the field level.
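
A hedged sketch combining a table-level NULLIF with field-level overrides (the names and the "N/A" marker are assumptions, and the exact position of the table-level clause should be checked against the into_table syntax diagram):

LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
NULLIF = "N/A"                       -- table level: any field equal to N/A is loaded as NULL
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(empno,
 ename NO NULLIF,                    -- keeps a literal N/A in this column
 comm  NULLIF comm=BLANKS)           -- a field-level NULLIF replaces the table-level one for this field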

  7. DATE FORMAT mask

TIMESTAMP FORMAT mask

TIMESTAMP WITH TIME ZONE mask

TIMESTAMP WITH LOCAL TIME ZONE mask

The datetime format can be specified at the table level or at the field level, for example:

LOAD DATA

INFILE myfile.dat

APPEND

INTO TABLE EMP

FIELDS TERMINATED BY ","

DATE FORMAT "DD-Month-YYYY"

(empno,ename,job, mgr,hiredate DATE,sal,comm,deptno,entrydate DATE)

  8. Assembling Logical Records from Physical Records
    8.1 Using CONCATENATE to Assemble Logical Records

CONCATENATE  integer

CONCATENATE combines n physical records into one logical record.

Note: in a direct path load, COLUMNARRAYROWS sets the number of rows in the column arrays, and its default is fairly large. If CONCATENATE produces very large logical records, the combination can exceed the memory SQL*Loader tries to allocate; in that case, lower COLUMNARRAYROWS to improve performance.
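
A minimal sketch, assuming every logical record is spread over exactly two physical lines of the data file (the file and table names are illustrative):

LOAD DATA
INFILE 'emp_split.dat'
CONCATENATE 2
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, deptno)

With CONCATENATE 2, lines 1-2 of the data file form the first logical record, lines 3-4 the second, and so on.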

    8.2 Using CONTINUEIF to Assemble Logical Records

CONTINUEIF defines a condition; when the condition is satisfied, physical records are combined into one logical record.

The CONTINUEIF clause is followed by a condition that is evaluated for each physical record, as it is read. For example, two records might be combined if a pound sign (#) were in byte position 80 of the first record. If any other character were there, then the second record would not be added to the first.

The positions in the CONTINUEIF clause refer to positions in each physical record. This is the only time you refer to positions in physical records. All other references are to logical records.

THIS: If the condition is true in the current record, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false. If the condition is false, then the current physical record becomes the last physical record of the current logical record. THIS is the default.

NEXT: If the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false.

Operator: The supported operators are equal (=) and not equal (!= or <>).For the equal operator, the field and comparison string must match exactly for the condition to be true. For the not equal operator, they can differ in any character.

LAST: This test is similar to THIS, but the test is always against the last nonblank character. If the last nonblank character in the current physical record meets the test, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false. If the condition is false in the current record, then the current physical record is the last physical record of the current logical record. LAST allows only a single character-continuation field (as opposed to THIS and NEXT, which allow multiple character-continuation fields).

pos_spec: Specifies the starting and ending column numbers in the physical record.

Column numbers start with 1. Either a hyphen or a colon is acceptable (start-end or start:end).

If you omit end, then the length of the continuation field is the length of the byte string or character string. If you use end, and the length of the resulting continuation field is not the same as that of the byte string or the character string, then the shorter one is padded. Character strings are padded with blanks, hexadecimal strings with zeros.

Str: A string of characters to be compared to the continuation field defined by start and end, according to the operator. The string must be enclosed in double or single quotation marks. The comparison is made character by character, blank padding on the right if necessary.

X'hex-str': A string of bytes in hexadecimal format used in the same way as str. X'1FB033' would represent the three bytes with values 1F, B0, and 33 (hexadecimal).

PRESERVE: Includes 'char_string' or X'hex_string' in the logical record. The default is to exclude them.

For CONTINUEIF THIS and CONTINUEIF LAST, if the PRESERVE parameter is not specified, then the continuation field is removed from all physical records when the logical record is assembled. That is, data values are allowed to span the records with no extra characters (continuation characters) in the middle. For example, if CONTINUEIF THIS(3:5)='***' is specified, then positions 3 through 5 are removed from all records. This means that the continuation characters are removed if they are in positions 3 through 5 of the record. It also means that the characters in positions 3 through 5 are removed from the record even if the continuation characters are not in positions 3 through 5.

For CONTINUEIF THIS and CONTINUEIF LAST, if the PRESERVE parameter is used, then the continuation field is kept in all physical records when the logical record is assembled.

It seems the paragraph above should really be describing CONTINUEIF THIS and CONTINUEIF NEXT; the following is what applies to CONTINUEIF LAST:

CONTINUEIF LAST differs from CONTINUEIF THIS and CONTINUEIF NEXT. For CONTINUEIF LAST, where the positions of the continuation field vary from record to record, the continuation field is never removed, even if PRESERVE is not specified.

Example 1:

Data file:

%%aaaaaaaa....

%%bbbbbbbb....

..cccccccc....

%%dddddddddd..

%%eeeeeeeeee..

..ffffffffff..

The following means that if character positions 1 and 2 contain %%, the condition is met and the record is combined with the next line:

CONTINUEIF THIS (1:2) = '%%'

The data actually inserted is:

aaaaaaaa....bbbbbbbb....cccccccc....

dddddddddd..eeeeeeeeee..ffffffffff..

If PRESERVE is used:

CONTINUEIF THIS PRESERVE (1:2) = '%%'

The data actually inserted is:

%%aaaaaaaa....%%bbbbbbbb......cccccccc....

%%dddddddddd..%%eeeeeeeeee....ffffffffff..

Example 2:

Data file:

..aaaaaaaa....

%%bbbbbbbb....

%%cccccccc....

..dddddddddd..

%%eeeeeeeeee..

%%ffffffffff..

Using NEXT:

CONTINUEIF NEXT (1:2) = '%%'

The result is:

aaaaaaaa....bbbbbbbb....cccccccc....

dddddddddd..eeeeeeeeee..ffffffffff..

  9. Loading Logical Records into Tables

9.1 INTO TABLE clause

The INTO TABLE clause names the table to load; the table must already exist.


You can specify a table-specific load method (INSERT, APPEND, REPLACE, or TRUNCATE); it overrides the global load method, whose default is INSERT.
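
A hedged sketch of a per-table method overriding the global one (the staging table names are assumptions):

LOAD DATA
INFILE 'both.dat'
APPEND                      -- global load method
INTO TABLE dept_stage
TRUNCATE                    -- this table is truncated before loading
WHEN (1) = '1'
(deptno POSITION(3:4), dname POSITION(8:21))
INTO TABLE emp_stage        -- no method here, so the global APPEND applies
WHEN (1) = '2'
(empno POSITION(3:6), ename POSITION(8:17))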

9.2 Table-Specific OPTIONS Parameter

The OPTIONS parameter inside an INTO TABLE clause is used only for parallel direct path loads.
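
A hedged fragment (the datafile path is made up, and the clause ordering should be verified against the into_table syntax diagram):

INTO TABLE emp
OPTIONS (FILE='/u02/oradata/orcl/users02.dbf')   -- extents for this parallel load are allocated from this datafile
APPEND
(empno POSITION(1:4) INTEGER EXTERNAL,
 ename POSITION(6:15) CHAR)

Such a session would typically also specify DIRECT=TRUE and PARALLEL=TRUE on the sqlldr command line.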

9.3 Loading Records Based on a Condition

Use the WHEN clause to filter records; several conditions can be combined.

Note: WHEN conditions can only be joined with AND (OR is not allowed), and the only operators are = and != (no greater-than or less-than, and no IS NULL).


For example, the following loads a record only when character position 5 contains 'q':

WHEN (5) = 'q'

The following uses multiple conditions:

WHEN (deptno = '10') AND (job = 'SALES')

Note: If a record with a LOBFILE or SDF is discarded, then SQL*Loader skips the corresponding data in that LOBFILE or SDF.

9.4 Specifying Default Data Delimiters

If all data fields are terminated similarly in the data file, then you can use the FIELDS clause to indicate the default termination and enclosure delimiters.

A delimiter specified at the column level overrides the default set here in the FIELDS clause.

You can override the delimiter for any given column by specifying it after the column name.

1) fields_spec


2) termination_spec


Note: Terminator strings can contain one or more characters. Also, TERMINATED BY EOF applies only to loading LOBs from a LOBFILE.

3) enclosure_spec
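
A hedged sketch of a FIELDS default with a column-level override (the names are assumptions):

LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno,
 ename,
 notes CHAR(2000) TERMINATED BY '|')   -- overrides the default comma terminator for this column only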

9.5 Handling Short Records with Missing Data

When the control file defines more fields than a record actually contains, should SQL*Loader raise an error or insert NULLs?

When the control file definition specifies more fields for a record than are present in the record, SQL*Loader must determine whether the remaining (specified) columns should be considered null or whether an error should be generated.

Case 1: absolute positions are used:

If the control file definition explicitly states that a field's starting position is beyond the end of the logical record, then SQL*Loader always defines the field as null.


Case 2: relative positions are used:

If a field is defined with a relative position (such as dname and loc in the following example), and the record ends before the field is found, then SQL*Loader could either treat the field as null or generate an error. SQL*Loader uses the presence or absence of the TRAILING NULLCOLS clause to determine the course of action.

For example, given the following data file:

10 Accounting

With the control file below, loc is loaded as NULL; without TRAILING NULLCOLS an error would be raised.

INTO TABLE dept

TRAILING NULLCOLS

( deptno CHAR TERMINATED BY " ",

  dname  CHAR TERMINATED BY WHITESPACE,

  loc    CHAR TERMINATED BY WHITESPACE

)

  10. Index Options

Controls how indexes are built during the load.

10.1 SORTED INDEXES Clause

The SORTED INDEXES clause applies to direct path loads. It tells SQL*Loader that the incoming data has already been sorted on the specified indexes, allowing SQL*Loader to optimize performance.
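
A minimal sketch, assuming the incoming data is already ordered by empno and that emp_empno_ix is an index on that column:

LOAD DATA
INFILE 'emp_sorted.dat'
APPEND
INTO TABLE emp
SORTED INDEXES (emp_empno_ix)
FIELDS TERMINATED BY ','
(empno, ename, deptno)

This is honored only for direct path loads (DIRECT=TRUE); if the data turns out not to be sorted as claimed, the listed index is left in an unusable state at the end of the load.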

10.2 SINGLEROW Option

The SINGLEROW option is intended for use during a direct path load with APPEND on systems with limited memory, or when loading a small number of records into a large table. This option inserts each index entry directly into the index, one record at a time.

By default, SQL*Loader does not use SINGLEROW to append records to a table. Instead, index entries are put into a separate, temporary storage area and merged with the original index at the end of the load. This method achieves better performance and produces an optimal index, but it requires extra storage space. During the merge operation, the original index, the new index, and the space for new entries all simultaneously occupy storage space.

With the SINGLEROW option, storage space is not required for new index entries or for a new index. The resulting index may not be as optimal as a freshly sorted one, but it takes less space to produce. It also takes more time because additional UNDO information is generated for each index insert. This option is suggested for use when either of the following situations exists:

  1. Available storage is limited.
  2. The number of records to be loaded is small compared to the size of the table (a ratio of 1:20 or less is recommended).
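
A hedged fragment showing where SINGLEROW might appear (emp_empno_ix is an assumed index name, and the clause position should be checked against the into_table syntax diagram):

INTO TABLE emp
APPEND
SORTED INDEXES (emp_empno_ix) SINGLEROW
(empno POSITION(1:4) INTEGER EXTERNAL,
 ename POSITION(6:15) CHAR)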

  11. Benefits of Using Multiple INTO TABLE Clauses

Benefits of using multiple INTO TABLE clauses:

Load data into different tables

Extract multiple logical records from a single input record

Distinguish different input record formats

Distinguish different input row object subtypes

With multiple INTO TABLE clauses, the data file is scanned only once and field scanning is continuous rather than restarting from the beginning of the record for each table; the fields are consumed in order and each table receives its portion of the data.

A key point when using multiple INTO TABLE clauses is that field scanning continues from where it left off when a new INTO TABLE clause is processed. The remainder of this section details important ways to make use of that behavior. It also describes alternative ways of using fixed field locations or the POSITION parameter.

11.1 Extracting Multiple Logical Records

When one physical record in the data file corresponds to several rows of a table, the record can be split and loaded into the same table with multiple INTO TABLE clauses:

Data file:

1119 Smith     1120 Yvonne

1121 Albert     1130 Thomas

Control file:

INTO TABLE emp

     (empno POSITION(1:4)  INTEGER EXTERNAL,

      ename POSITION(6:15) CHAR)

INTO TABLE emp

     (empno POSITION(17:20) INTEGER EXTERNAL,

      ename POSITION(21:30) CHAR)

The same thing can be done with relative positioning. Below, empno is terminated by a single space, while ename is terminated by WHITESPACE (any number of spaces or tabs):

INTO TABLE emp

     (empno INTEGER EXTERNAL TERMINATED BY " ",

      ename CHAR             TERMINATED BY WHITESPACE)

INTO TABLE emp

     (empno INTEGER EXTERNAL TERMINATED BY " ",

      ename CHAR             TERMINATED BY WHITESPACE)

The important point in this example is that the second empno field is found immediately after the first ename, although it is in a separate INTO TABLE clause. Field scanning does not start over from the beginning of the record for a new INTO TABLE clause. Instead, scanning continues where it left off.

11.2 Distinguishing Different Input Record Formats

A data file may not have a single uniform record format; for example, the following data file contains two kinds of records: department records start with 1 and employee records start with 2.

1 50   Manufacturing       — DEPT record

2 1119 Smith      50       — EMP record

2 1120 Snyder     50

1 60   Shipping

2 1121 Stevens    60

Control file used:

INTO TABLE dept

   WHEN recid = 1

   (recid  FILLER POSITION(1:1)  INTEGER EXTERNAL,

    deptno POSITION(3:4)  INTEGER EXTERNAL,

    dname  POSITION(8:21) CHAR)

INTO TABLE emp

   WHEN recid <> 1

   (recid  FILLER POSITION(1:1)   INTEGER EXTERNAL,

    empno  POSITION(3:6)   INTEGER EXTERNAL,

    ename  POSITION(8:17)  CHAR,

    deptno POSITION(19:20) INTEGER EXTERNAL)

Relative positioning can also be used here:

INTO TABLE dept

   WHEN recid = 1

   (recid  FILLER INTEGER EXTERNAL TERMINATED BY WHITESPACE,

    deptno INTEGER EXTERNAL TERMINATED BY WHITESPACE,

    dname  CHAR TERMINATED BY WHITESPACE)

INTO TABLE emp

   WHEN recid <> 1

   (recid  FILLER POSITION(1) INTEGER EXTERNAL TERMINATED BY ' ',

    empno  INTEGER EXTERNAL TERMINATED BY ' ',

    ename  CHAR TERMINATED BY WHITESPACE,

    deptno INTEGER EXTERNAL TERMINATED BY ' ')

11.3 Distinguishing Different Input Row Object Subtypes

A single data file may contain records made up of row objects inherited from the same base row object type.

For example, consider the following simple object type and object table definitions, in which a nonfinal base object type is defined along with two object subtypes that inherit their row objects from the base type:

CREATE TYPE person_t AS OBJECT

 (name    VARCHAR2(30),

  age     NUMBER(3)) not final;

CREATE TYPE employee_t UNDER person_t

 (empid   NUMBER(5),

  deptno  NUMBER(4),

  dept    VARCHAR2(30)) not final;

CREATE TYPE student_t UNDER person_t

 (stdid   NUMBER(5),

  major   VARCHAR2(20)) not final;

CREATE TABLE persons OF person_t;

The following input data file contains a mixture of these row object subtypes. A type ID field distinguishes between the three subtypes. person_t objects have a P in the first column, employee_t objects have an E, and student_t objects have an S.

P,James,31,

P,Thomas,22,

E,Pat,38,93645,1122,Engineering,

P,Bill,19,

P,Scott,55,

S,Judy,45,27316,English,

S,Karen,34,80356,History,

E,Karen,61,90056,1323,Manufacturing,

S,Pat,29,98625,Spanish,

S,Cody,22,99743,Math,

P,Ted,43,

E,Judy,44,87616,1544,Accounting,

E,Bob,50,63421,1314,Shipping,

S,Bob,32,67420,Psychology,

E,Cody,33,25143,1002,Human Resources,

The following control file uses relative positioning based on the POSITION parameter to load this data. Note the use of the TREAT AS clause with a specific object type name. This informs SQL*Loader that all input row objects for the object table will conform to the definition of the named object type.

Note:

Multiple subtypes cannot be loaded with the same INTO TABLE statement. Instead, you must use multiple INTO TABLE statements and have each one load a different subtype.

INTO TABLE persons

REPLACE

WHEN typid = 'P' TREAT AS person_t

FIELDS TERMINATED BY ","

 (typid   FILLER  POSITION(1) CHAR,

  name            CHAR,

  age             CHAR)

INTO TABLE persons

REPLACE

WHEN typid = 'E' TREAT AS employee_t

FIELDS TERMINATED BY ","

 (typid   FILLER  POSITION(1) CHAR,

  name            CHAR,

  age             CHAR,

  empid           CHAR,

  deptno          CHAR,

  dept            CHAR)

INTO TABLE persons

REPLACE

WHEN typid = 'S' TREAT AS student_t

FIELDS TERMINATED BY ","

 (typid   FILLER  POSITION(1) CHAR,

  name            CHAR,

  age             CHAR,

  stdid           CHAR,

  major           CHAR)
