How to import and export MySQL data with Sqoop


Sqoop MySQL import and export
1.
Installation (prerequisite: Hadoop is already running)

[hadoop@h91 ~]$ tar -zxvf sqoop-1.3.0-cdh3u5.tar.gz
[hadoop@h91 hadoop-0.20.2-cdh3u5]$ cp hadoop-core-0.20.2-cdh3u5.jar /home/hadoop/sqoop-1.3.0-cdh3u5/lib/

[hadoop@h91 ~]$ cp ojdbc6.jar sqoop-1.3.0-cdh3u5/lib/    (ojdbc6.jar is the Oracle JDBC driver, needed for the Oracle section later)

[hadoop@h91 ~]$ vi .bash_profile
Add:
export SQOOP_HOME=/home/hadoop/sqoop-1.3.0-cdh3u5

[hadoop@h91 ~]$ source .bash_profile
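
A quick sanity check that the tarball and SQOOP_HOME are wired up; sqoop version only prints build information, so it is safe to run now (expect HBase/ZooKeeper warnings until step 2):

[hadoop@h91 ~]$ $SQOOP_HOME/bin/sqoop version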

2.

[hadoop@h91 ~]$ cd sqoop-1.3.0-cdh3u5/bin/

[hadoop@h91 bin]$ vi configure-sqoop
Comment out the HBase and ZooKeeper checks:
## Moved to be a runtime check in sqoop.
#if [ ! -d "${HBASE_HOME}" ]; then
#  echo "Warning: $HBASE_HOME does not exist! HBase imports will fail."
#  echo 'Please set $HBASE_HOME to the root of your HBase installation.'
#fi
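
The ZooKeeper check sits just below the HBase block in configure-sqoop and is commented out the same way (the exact warning text varies between Sqoop versions):

#if [ ! -d "${ZOOKEEPER_HOME}" ]; then
#  echo "Warning: $ZOOKEEPER_HOME does not exist!"
#  echo 'Please set $ZOOKEEPER_HOME to the root of your ZooKeeper installation.'
#fi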


3. Grant MySQL privileges

The GRANT statements below both create the sqoop account and assign its privileges, so there is no need to insert into mysql.user by hand (direct inserts break on MySQL versions where other columns have no defaults). The password 123456 matches the --password used by the Sqoop commands below.

mysql> grant all privileges on *.* to 'sqoop'@'localhost' identified by '123456';
mysql> grant all privileges on *.* to 'sqoop'@'%' identified by '123456' with grant option;
mysql> flush privileges;

mysql> use test;
mysql> create table sss (id int,name varchar(10));

mysql> insert into sss values(1,'zs');
mysql> insert into sss values(2,'ls');




4. Test that Sqoop can connect to MySQL
[hadoop@h91 mysql-connector-java-5.0.7]$ cp mysql-connector-java-5.0.7-bin.jar /home/hadoop/sqoop-1.3.0-cdh3u5/lib/

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop list-tables --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456
(the sss table should appear in the listing)
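
If list-tables fails, a coarser connectivity check is the list-databases tool, which needs only server-level access (same connection parameters, no database in the URL):

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop list-databases --connect jdbc:mysql://192.168.18.111:3306 --username sqoop --password 123456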

5. Import the sss table from MySQL into HDFS
[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop import --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table sss -m 1
(-m sets the number of parallel map tasks; the default is 4)

[hadoop@h91 hadoop-0.20.2-cdh3u5]$ bin/hadoop fs -ls
An sss directory has appeared:
[hadoop@h91 hadoop-0.20.2-cdh3u5]$ bin/hadoop fs -ls /user/hadoop/sss
[hadoop@h91 hadoop-0.20.2-cdh3u5]$ bin/hadoop fs -cat /user/hadoop/sss/part-m-00000
The contents of the sss table are visible:
1,zs
2,ls
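
The import tool can also pull just a subset of the table; a sketch using --columns and --where (the target directory /user/hadoop/sss2 is only an example name):

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop import --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table sss --columns id,name --where "id > 1" --target-dir /user/hadoop/sss2 -m 1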

6. Export from HDFS back into MySQL

mysql> delete from sss;

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop export --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table sss --export-dir hdfs://h1:9000/user/hadoop/sss/part-m-00000

[root@o222 ~]# mysql -usqoop -p
mysql> use test
mysql> select * from sss;
The table's data is back.
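
--export-dir may also point at the whole directory rather than a single part file, which matters once an import runs with more than one mapper; a sketch with the same table:

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop export --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table sss --export-dir /user/hadoop/sss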


---------------------------------------------------------------
Incremental extraction of MySQL data into HDFS
Type 1 (append: an ever-increasing column such as an id)
bin/sqoop import --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table sss -m 1 --target-dir /user/hadoop/a --check-column id --incremental append --last-value 3
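
Only rows with id greater than --last-value (here 3) are pulled, and Sqoop prints the new maximum at the end of the run. Updating --last-value by hand is tedious; saved jobs can track it automatically. A sketch, assuming the sqoop job tool is available in this build (the job name sss_incr is arbitrary):

bin/sqoop job --create sss_incr -- import --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table sss -m 1 --target-dir /user/hadoop/a --check-column id --incremental append --last-value 0
bin/sqoop job --exec sss_incr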


Type 2 (lastmodified: a timestamp column)
mysql> create table s2 (id int,sj timestamp not null default current_timestamp);
mysql> insert into s2 (id)values(123);
mysql> insert into s2 (id)values(321);
mysql> select * from s2;
+------+---------------------+
| id   | sj                  |
+------+---------------------+
|  123 | 2015-11-20 22:34:09 |
|  321 | 2015-11-20 22:34:23 |
+------+---------------------+

bin/sqoop import --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table s2 -m 1 --target-dir /user/hadoop/abc --incremental lastmodified --check-column sj --last-value '2015-11-20 22:34:15'
Result: only the row with id 321 is imported, because its timestamp (22:34:23) is later than the given --last-value (22:34:15).
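To watch the incremental pull again, insert a fresh row and re-run with the previous run's timestamp as the new --last-value (the id 555 and the target directory /user/hadoop/abc2 are just example values; lastmodified needs a fresh target directory):

mysql> insert into s2 (id) values(555);

bin/sqoop import --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table s2 -m 1 --target-dir /user/hadoop/abc2 --incremental lastmodified --check-column sj --last-value '2015-11-20 22:34:23'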
---------------------------------------------------------
Specifying the field delimiter of the input data for a Sqoop export
[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop export --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --table qqq --export-dir hdfs://h1:9000/user/hadoop/qqq.txt --input-fields-terminated-by '\t'

**** --input-fields-terminated-by '\t' declares the field delimiter of the files being exported ****
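
For the export above to succeed, the qqq table and the tab-delimited qqq.txt must already exist; a hypothetical setup (the table layout and sample rows are assumptions):

mysql> create table qqq (id int, name varchar(10));

[hadoop@h91 ~]$ printf '1\taa\n2\tbb\n' > qqq.txt
[hadoop@h91 hadoop-0.20.2-cdh3u5]$ bin/hadoop fs -put qqq.txt /user/hadoop/qqq.txt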


----------------------------------------------------------
The sqoop eval tool:
Run SQL statements against the relational database directly from Sqoop.


[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop eval --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --query "select * from sss"

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop eval --connect jdbc:mysql://192.168.18.111:3306/test --username sqoop --password 123456 --query "insert into sss values(3,'ww')"

==============================================================
sqoop hive
Importing from MySQL into Hive

mysql> select * from sss;
+------+------+
| id   | name |
+------+------+
|    1 | zs   |
|    2 | ls   |
|    3 | ww   |
|   10 | haha |
+------+------+


hive> create table tb1(id int,name string)
     row format delimited
     fields terminated by ',';

**** Sqoop's default field delimiter is "," (the import below still passes it explicitly to match the Hive table) ****


[hadoop@h101 sqoop-1.3.0-cdh3u5]$ bin/sqoop import --connect jdbc:mysql://192.168.18.111/test --username sqoop --password 123456 --table sss  --hive-import -m 1 --hive-table tb1 --fields-terminated-by ','

hive> select * from tb1;
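
Behind the scenes the rows land in Hive's warehouse directory; assuming the default warehouse location, the raw file can be inspected directly as well:

[hadoop@h101 hadoop-0.20.2-cdh3u5]$ bin/hadoop fs -cat /user/hive/warehouse/tb1/part-m-00000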





============================================================
sqoop oracle

1.

[hadoop@h91 ~]$ cp ojdbc6.jar sqoop-1.3.0-cdh3u5/lib/

Test the Sqoop connection to Oracle
[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop list-tables --connect jdbc:oracle:thin:@192.168.8.222:1521:TEST --username scott --password abc

2.
Import from Oracle into HDFS
[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop import --connect jdbc:oracle:thin:@192.168.8.222:1521:TEST --username SCOTT --password abc --verbose -m 1 --table S1
(the table name and username must be uppercase for Oracle)

View the result in HDFS:
[hadoop@h91 hadoop-0.20.2-cdh3u5]$ bin/hadoop fs -cat /user/hadoop/S1/part-m-00000
101,zhangsan
102,lisi
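
sqoop eval works against Oracle too and is a quick way to confirm credentials before a long import; a sketch reusing the connection above (rownum is standard Oracle syntax):

[hadoop@h91 sqoop-1.3.0-cdh3u5]$ bin/sqoop eval --connect jdbc:oracle:thin:@192.168.8.222:1521:TEST --username SCOTT --password abc --query "select * from S1 where rownum <= 5"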


================================================================
sqoop hbase
Importing from MySQL into HBase

1. MySQL source (the t2 table must have id as its primary key)
mysql> select * from t2;
+----+-------+-------+-------+
| id | name1 | name2 | name3 |
+----+-------+-------+-------+
|  1 | aa    | bb    | cc    |
|  2 | aaa   | bbb   | ccc   |
+----+-------+-------+-------+

2. vi .bash_profile and add:
export HBASE_HOME=/home/hadoop/hbase-0.90.6-cdh3u5

3.
bin/sqoop import --connect jdbc:mysql://192.168.8.101:3306/test --username sqoop --password sqoop --table t2 --hbase-table qqq --column-family ccf --hbase-row-key id --hbase-create-table
(--hbase-create-table creates the target table qqq in HBase if it does not already exist; the source table is t2 from step 1)
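
To verify that the rows arrived, scan the new table from the HBase shell (the path below assumes the HBASE_HOME set in step 2):

[hadoop@h91 hbase-0.90.6-cdh3u5]$ bin/hbase shell
hbase(main):001:0> scan 'qqq'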










