Environment:
hadoop-2.7.5
sqoop-1.4.7
zookeeper-3.4.10
hive-2.3.3 (metastore configured in MySQL)
jdk1.8.0_151
oracle 11.2.0.3.0
After a fair amount of searching I finally got a first working setup; here are notes on the process.
1. Copy hive/conf/hive-site.xml to the sqoop/conf directory.
2. Configure sqoop-env.sh, pointing the variables at the corresponding directories:
export HADOOP_COMMON_HOME=/home/hadoop/hadoop-2.7.5
export HADOOP_MAPRED_HOME=/home/hadoop/hadoop-2.7.5
export HIVE_HOME=/home/hadoop/hive-2.3.3
export ZOOCFGDIR=/home/hadoop/zookeeper-3.4.10
3. Copy hive/lib/derby-10.10.2.0.jar to sqoop/lib.
4. Copy /home/hadoop/hadoop-2.7.5/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar to sqoop/lib.
5. Copy hive/lib/jackson-databind-2.6.5.jar, jackson-core-2.6.5.jar, and jackson-annotations-2.6.0.jar to sqoop/lib.
6. Copy the Oracle 11g ojdbc6.jar to sqoop/lib (taken from the Oracle directory on the Linux host).
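Steps 3-6 can be sketched as one shell script. The paths below are taken from the environment list above; ORACLE_HOME and the exact ojdbc6.jar location are assumptions, so adjust them to your install:

```shell
# Jar locations assumed from the versions listed above; adjust as needed.
SQOOP_LIB=/home/hadoop/sqoop-1.4.7/lib
HIVE_LIB=/home/hadoop/hive-2.3.3/lib
HADOOP_TOOLS=/home/hadoop/hadoop-2.7.5/share/hadoop/tools/lib

# Copy each dependency onto Sqoop's classpath, skipping any jar
# that is missing so the script can be re-run safely.
for jar in \
    "$HIVE_LIB/derby-10.10.2.0.jar" \
    "$HADOOP_TOOLS/aws-java-sdk-1.7.4.jar" \
    "$HIVE_LIB/jackson-databind-2.6.5.jar" \
    "$HIVE_LIB/jackson-core-2.6.5.jar" \
    "$HIVE_LIB/jackson-annotations-2.6.0.jar" \
    "$ORACLE_HOME/jdbc/lib/ojdbc6.jar"; do
  [ -f "$jar" ] && cp "$jar" "$SQOOP_LIB/"
done
```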
7. Edit /home/hadoop/jdk1.8.0_151/jre/lib/security/java.policy and add: permission javax.management.MBeanTrustPermission "register";
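In java.policy the permission line goes inside a grant block (the default block that applies to all code), roughly like this:

```
grant {
    // existing default permissions ...
    permission javax.management.MBeanTrustPermission "register";
};
```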
8. Edit ~/.bashrc and append: export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
Run the import:
sqoop import --connect jdbc:oracle:thin:@10.0.3.3:1521:sid --username abc --password r123 --table CMX.SALES -m 1 --hive-import --hive-overwrite --hive-database ra --hive-table "cmx_sales" --null-non-string '' --null-string '' --delete-target-dir --hive-drop-import-delims
This command successfully extracts the cmx_sales table from Oracle into Hive. The remaining problem is that the table Hive auto-creates uses different column types than Oracle, which changes the data's precision.
For example, an Oracle column of type number(10) becomes a double in Hive, and the value 1000 turns into 1000.0.
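One workaround (not yet verified in this setup) is Sqoop's --map-column-hive option, which overrides the Hive type for named columns. The column name AMOUNT below is a placeholder for whichever number(10) column is affected:

```shell
# AMOUNT is a hypothetical column name; DECIMAL keeps integral values
# exact instead of letting Sqoop default the column to DOUBLE.
sqoop import --connect jdbc:oracle:thin:@10.0.3.3:1521:sid \
  --username abc --password r123 \
  --table CMX.SALES -m 1 \
  --hive-import --hive-overwrite \
  --hive-database ra --hive-table "cmx_sales" \
  --map-column-hive AMOUNT=DECIMAL \
  --null-non-string '' --null-string '' \
  --delete-target-dir --hive-drop-import-delims
```

Note that Hive's DECIMAL defaults to precision (10,0), which matches number(10); the mapping list is comma-separated, so parameterized types like DECIMAL(10,0) can clash with the option parser in Sqoop 1.4.x.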
To be continued...