3.1 Empty the employee table in MySQL
In a Linux terminal, enter:
mysql -usqoop -p
(press Enter, then type the password at the prompt)
truncate table sqoop.employee;
mysql> select * from sqoop.employee;
Empty set (0.00 sec)
The table has been emptied successfully.
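The truncate-then-verify step above can be sketched in Python. This is a minimal illustration using SQLite from the standard library as a stand-in for MySQL (SQLite has no TRUNCATE TABLE, so DELETE FROM plays the same role of emptying the table); the table and row mirror the employee data used in this tutorial.

```python
import sqlite3

# SQLite stand-in for the MySQL steps: create the table, add a row,
# empty it, then verify it is empty (MySQL would report "Empty set").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (employee_id INTEGER PRIMARY KEY, employee_name TEXT)")
conn.execute("INSERT INTO employee VALUES (101, 'zhangsan')")
conn.execute("DELETE FROM employee")   # plays the role of TRUNCATE TABLE
rows = conn.execute("SELECT * FROM employee").fetchall()
print(rows)                            # []
```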
3.2 Export from Hive to MySQL
In a Linux terminal, enter:
sqoop export --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop \
--table employee --export-dir /user/hive/warehouse/employee --input-fields-terminated-by '\001'
Explanation: in jdbc:mysql://localhost:3306/sqoop, 3306 is the MySQL port; localhost is the host where MySQL is installed (adjust it for your own environment); sqoop is the name of the database in MySQL.
Hive's default field delimiter is '\001'.
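To see what that delimiter means in practice, here is a minimal Python sketch: Hive's default field separator is the Ctrl-A character (octal \001, '\x01' in Python), and a raw row in the warehouse file is just the column values joined by it. The sample row below reuses the employee data from this tutorial.

```python
# One row of the employee table as it appears in the raw Hive warehouse
# file: fields separated by the Ctrl-A character ('\x01', octal \001).
raw_row = "101\x01zhangsan"      # employee_id \x01 employee_name
fields = raw_row.split("\x01")
print(fields)                    # ['101', 'zhangsan']
```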
At the end of the run, the output reports that 3 records were exported:
14/06/04 14:49:45 INFO mapreduce.ExportJobBase: Transferred 760 bytes in 11.8562 seconds (64.1017 bytes/sec)
14/06/04 14:49:45 INFO mapreduce.ExportJobBase: Exported 3 records.
Query the employee table in MySQL to verify that the data is there:
mysql> select * from sqoop.employee;
+-------------+---------------+
| employee_id | employee_name |
+-------------+---------------+
|         101 | zhangsan      |
|         102 | lisi          |
|         103 | wangwu        |
+-------------+---------------+
3 rows in set (0.00 sec)
The export to MySQL succeeded.
Additional note:
Because the structure of the employee table in MySQL is already fixed, with employee_id as the primary key, exporting the same data from Hive more than once will only succeed the first time; later runs fail on duplicate keys.
To see this, enter the same command in the Linux terminal again:
sqoop export --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop \
--table employee --export-dir /user/hive/warehouse/employee --input-fields-terminated-by '\001'
The run output shows:
14/06/04 15:14:34 INFO mapred.JobClient: Task Id : attempt_201406041140_0012_m_000001_1, Status : FAILED
Task attempt_201406041140_0012_m_000001_1 failed to report status for 600 seconds. Killing!
14/06/04 15:14:36 INFO mapred.JobClient: map 25% reduce 0%
14/06/04 15:14:37 INFO mapred.JobClient: Task Id : attempt_201406041140_0012_m_000002_1, Status : FAILED
Task attempt_201406041140_0012_m_000002_1 failed to report status for 600 seconds. Killing!
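The root cause of those failed map tasks can be sketched in a few lines of Python, again using standard-library SQLite as a stand-in for MySQL: the primary key on employee_id rejects the second insert of the same row, which is exactly what each Sqoop map task runs into when it replays the INSERTs.

```python
import sqlite3

# SQLite stand-in for MySQL: the first insert succeeds, the duplicate
# insert of the same primary key raises an IntegrityError.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (employee_id INTEGER PRIMARY KEY, employee_name TEXT)")
conn.execute("INSERT INTO employee VALUES (101, 'zhangsan')")      # succeeds
try:
    conn.execute("INSERT INTO employee VALUES (101, 'zhangsan')")  # duplicate key
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

If re-running the export is actually desired, Sqoop can be told to update existing rows instead of blindly inserting, via --update-key employee_id (optionally with --update-mode allowinsert for upsert behavior); check the behavior against your Sqoop version's documentation.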