2. A dedicated SQL connector for HDFS
Oracle has a dedicated solution for connecting to HDFS: [O]racle [D]irect [C]onnector for [H]adoop Distributed File System, ODCH for short.
The package can be downloaded from the Oracle website:
Choose the first item: Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
We use version 2.1 here. Once the package is downloaded, just unzip it:
[root@ora11g ~]# unzip oraosch-2.1.0.zip
Ridiculously, the zip contains yet another zip:
[root@ora11g ~]# unzip orahdfs-2.1.0.zip -d /usr/local/
[root@ora11g ~]# chown -R oracle:oinstall /usr/local/orahdfs-2.1.0
In the bin directory of the extracted package, the hdfs_stream script has two environment variables that must be set manually. Edit the file:
[root@ora11g ~]# vi /usr/local/orahdfs-2.1.0/bin/hdfs_stream
Add two environment variable lines:
OSCH_HOME=/usr/local/orahdfs-2.1.0
HADOOP_HOME=/usr/local/hadoop-0.20.2
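Before moving on, it is worth confirming that both paths actually exist; a quick check (using the same paths assumed in this walkthrough) might be:
[root@ora11g ~]# ls /usr/local/orahdfs-2.1.0/jlib/orahdfs.jar
[root@ora11g ~]# ls /usr/local/hadoop-0.20.2/bin/hadoop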
Edit the grid user's environment variables again:
[root@ora11g ~]# vi /home/grid/.bash_profile
Add the following lines:
export DIRECTHDFS_HOME=/usr/local/orahdfs-2.1.0
export OSCH_HOME=${DIRECTHDFS_HOME}
export ORAHDFS_JAR=${DIRECTHDFS_HOME}/jlib/orahdfs.jar
export HDFS_BIN_PATH=${DIRECTHDFS_HOME}/bin
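After saving, reload the profile and confirm the variables resolve, for example:
[grid@ora11g ~]$ source ~/.bash_profile
[grid@ora11g ~]$ echo $OSCH_HOME $ORAHDFS_JAR $HDFS_BIN_PATH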
Log in as the oracle user and create two directories, one to hold ODCH's operation logs and the other for the location files:
[oracle@ora11g ~]$ mkdir -p /data/ora11g/ODCH/{logs,extdir}
[oracle@ora11g ~]$ chmod -R 777 /data/ora11g/ODCH/
Then connect to Oracle as sysdba and create three directory objects:
SQL> create or replace directory ODCH_LOG_DIR as '/data/ora11g/ODCH/logs';
grant read, write on directory ODCH_LOG_DIR to SCOTT;
create or replace directory ODCH_DATA_DIR as '/data/ora11g/ODCH/extdir';
grant read, write on directory ODCH_DATA_DIR to SCOTT;
create or replace directory HDFS_BIN_PATH as '/usr/local/orahdfs-2.1.0/bin';
grant read,write,execute on directory HDFS_BIN_PATH to SCOTT;
Directory created.
SQL>
Grant succeeded.
SQL>
Directory created.
SQL>
Grant succeeded.
SQL>
Directory created.
SQL>
Grant succeeded.
The three directory objects are:
HDFS_BIN_PATH: the directory containing the hdfs_stream script.
ODCH_DATA_DIR: the directory holding the location files. A location file is a configuration file that records the HDFS path/file name of the data file and its encoding.
ODCH_LOG_DIR: the directory where Oracle writes the external table's log/bad files.
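As a quick sanity check (not part of the original steps), the three objects and their paths can be confirmed from the data dictionary:
[oracle@ora11g ~]$ sqlplus -S / as sysdba <<EOF
select directory_name, directory_path
  from dba_directories
 where directory_name in ('ODCH_LOG_DIR','ODCH_DATA_DIR','HDFS_BIN_PATH');
EOF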
Create the external table. Note that the value in LOCATION is just a placeholder for now and will be changed later:
SQL> conn scott/tiger
Connected.
SQL> CREATE TABLE odch_ext_table
2 ( ID NUMBER
3 ,OWNER VARCHAR2(128)
4 ,NAME VARCHAR2(128)
5 ,MODIFIED DATE
6 ,Val NUMBER
7 ) ORGANIZATION EXTERNAL
8 (TYPE oracle_loader
9 DEFAULT DIRECTORY "ODCH_DATA_DIR"
10 ACCESS PARAMETERS
11 (
12 records delimited by newline
13 preprocessor HDFS_BIN_PATH:hdfs_stream
14 badfile ODCH_LOG_DIR:'odch_ext_table%a_%p.bad'
15 logfile ODCH_LOG_DIR:'odch_ext_table%a_%p.log'
16 fields terminated by ',' OPTIONALLY ENCLOSED BY '"'
17 missing field values are null
18 (
19 ID DECIMAL EXTERNAL,
20 OWNER CHAR(200),
21 NAME CHAR(200),
22 MODIFIED CHAR DATE_FORMAT DATE MASK "YYYY-MM-DD HH24:MI:SS",
23 Val DECIMAL EXTERNAL
24 )
25 )
26 LOCATION ('odch/tmpdata.csv')
27 ) PARALLEL REJECT LIMIT UNLIMITED;
Table created.
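If you want to confirm how Oracle registered the table, a small check (not in the original walkthrough) against the dictionary works:
[oracle@ora11g ~]$ sqlplus -S scott/tiger <<EOF
select table_name, type_name, default_directory_name
  from user_external_tables
 where table_name = 'ODCH_EXT_TABLE';
EOF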
Switch to the grid user, create a directory in HDFS, and upload the file to HDFS:
[grid@ora11g ~]$ hadoop dfs -mkdir odch
[grid@ora11g ~]$ hadoop dfs -put tmpdata.csv odch/
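A quick listing verifies the upload landed where expected:
[grid@ora11g ~]$ hadoop dfs -ls odch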
The tmpdata.csv file was generated from all_objects; the SQL used was: select rownum,owner,object_name,created,data_object_id from all_objects
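For reference, spooling such a CSV out of SQL*Plus could look roughly like the sketch below; the output path /tmp/tmpdata.csv and the formatting settings are assumptions, with the date format chosen to match the external table's date mask:
[oracle@ora11g ~]$ sqlplus -S scott/tiger <<EOF
set heading off feedback off pagesize 0 linesize 500 trimspool on
spool /tmp/tmpdata.csv
select rownum||','||owner||','||object_name||','||
       to_char(created,'YYYY-MM-DD HH24:MI:SS')||','||data_object_id
  from all_objects;
spool off
EOF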
Then generate the location file with the ODCH jar:
[grid@ora11g ~]$ hadoop jar \
> ${ORAHDFS_JAR} oracle.hadoop.hdfs.exttab.ExternalTable \
> -D oracle.hadoop.hdfs.exttab.tableName=odch_ext_table \
> -D oracle.hadoop.hdfs.exttab.datasetPaths=odch \
> -D oracle.hadoop.hdfs.exttab.datasetRegex=tmpdata.csv \
> -D oracle.hadoop.hdfs.exttab.connection.url="jdbc:oracle:thin:@//192.168.30.244:1521/jssdb" \
> -D oracle.hadoop.hdfs.exttab.connection.user=SCOTT \
> -publish
DEPRECATED: The class oracle.hadoop.hdfs.exttab.ExternalTable is deprecated.
It is replaced by oracle.hadoop.exttab.ExternalTable.
Oracle SQL Connector for HDFS Release 2.1.0 - Production
Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
[Enter Database Password:]
The publish command succeeded.
ALTER TABLE "SCOTT"."ODCH_EXT_TABLE"
LOCATION
(
'osch-20130516031513-6000-1'
);
The following location files were created.
osch-20130516031513-6000-1 contains 1 URI, 4685141 bytes
4685141 hdfs://hdnode1:9000/user/grid/odch/tmpdata.csv
The following location files were deleted.
odch/tmpdata.csv not deleted. It was not created by OSCH.
Here, the -D options specify the relevant parameters:
tableName: name of the external table
datasetPaths: path of the source data in HDFS
datasetRegex: data file name (a regex)
connection.url: Oracle database connection string
connection.user: database user name (scott)
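Because the tool accepts Hadoop's generic options, the same properties could also be kept in a configuration XML file and passed with -conf instead of repeating -D flags; a sketch (the file name and exact usage here are assumptions, not from the original) is:
[grid@ora11g ~]$ cat > odch_conf.xml <<'EOF'
<configuration>
  <property>
    <name>oracle.hadoop.hdfs.exttab.tableName</name>
    <value>odch_ext_table</value>
  </property>
  <property>
    <name>oracle.hadoop.hdfs.exttab.connection.url</name>
    <value>jdbc:oracle:thin:@//192.168.30.244:1521/jssdb</value>
  </property>
  <!-- remaining properties follow the same pattern -->
</configuration>
EOF
[grid@ora11g ~]$ hadoop jar ${ORAHDFS_JAR} oracle.hadoop.hdfs.exttab.ExternalTable -conf odch_conf.xml -publish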
The generated osch-20130516031513-6000-1 is the so-called location file: it is what actually points to where the target data file is stored in HDFS. A look at its contents makes this clear:
[root@ora11g ~]# more /data/ora11g/ODCH/extdir/osch-20130516031513-6000-1
1.0
osch-20130516031513-6000-1
2013-05-16T13:54:02
2013-05-16T03:15:13
Oracle SQL Connector for HDFS Release 2.1.0 - Production
2.1.0
hdfs://hdnode1:9000/user/grid/odch/tmpdata.csv
Following the hint in the output, change the file that the odch_ext_table external table reads:
SQL> ALTER TABLE "SCOTT"."ODCH_EXT_TABLE"
2 LOCATION
3 (
4 'osch-20130516031513-6000-1'
5 );
Table altered.
SQL> set line 150 pages 1000;
SQL> col owner for a10
SQL> col name for a20
SQL> select * from odch_ext_table where rownum<10;
ID OWNER NAME MODIFIED VAL
---------- ---------- -------------------- ------------ ----------
1 SYS ICOL$ 15-MAY-13 2
2 SYS I_USER1 15-MAY-13 46
3 SYS CON$ 15-MAY-13 28
4 SYS UNDO$ 15-MAY-13 15
5 SYS C_COBJ# 15-MAY-13 29
6 SYS I_OBJ# 15-MAY-13 3
7 SYS PROXY_ROLE_DATA$ 15-MAY-13 25
8 SYS I_IND1 15-MAY-13 41
9 SYS I_CDEF2 15-MAY-13 54
9 rows selected.
The data is read out of HDFS; job done.
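From here the HDFS data can be used like any other table; for example, a simple way to materialize a local copy inside Oracle (the target table name below is made up) would be:
[oracle@ora11g ~]$ sqlplus -S scott/tiger <<EOF
create table odch_local_copy as select * from odch_ext_table;
select count(*) from odch_local_copy;
EOF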