Sqoop 1.4: Syncing Data to HDFS

1. List all databases

sqoop list-databases --connect jdbc:mysql://192.168.1.133:3306/ --username root --password root

2. List all tables

sqoop list-tables --connect jdbc:mysql://192.168.1.133:3306/ue_incas --username root -P

3. Import data from MySQL into HDFS

sqoop import --connect jdbc:mysql://192.168.1.133:3306/ue_incas --username root -P --table sys_param --target-dir /mysqltohdfs2/sys_param -m 4
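
Once the job finishes, the result can be verified with standard HDFS commands; with -m 4 Sqoop typically writes one part-m-0000N output file per map task (the paths below assume the --target-dir used above):

hdfs dfs -ls /mysqltohdfs2/sys_param
hdfs dfs -cat /mysqltohdfs2/sys_param/part-m-00000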

Notes for importing from MySQL into HDFS:

(1) The "sys_param" directory in --target-dir /mysqltohdfs2/sys_param must not already exist; remove any leftover directory first, as shown below;
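
If the directory is left over from a previous run, remove it before re-importing (or let Sqoop do it automatically with the --delete-target-dir import option available in Sqoop 1.4.x):

hdfs dfs -rm -r /mysqltohdfs2/sys_param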

(2) Exception:

 Application application_1531365865313_0001 failed 2 times due to AM Container for appattempt_1531365865313_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://localhost:8088/cluster/app/application_1531365865313_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1531365865313_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
at org.apache.hadoop.util.Shell.run(Shell.java:482)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1

Failing this attempt. Failing the application. 
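
The generic YARN diagnostics above rarely show the real cause. If log aggregation is enabled, the AM container's stdout/stderr can be pulled with the application id from the message:

yarn logs -applicationId application_1531365865313_0001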

Solutions:

(1) Check that the file paths configured in core-site.xml and mapred-site.xml are correct.

The log (staging) path in mapred-site.xml:

<property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/hadoop_log</value>
</property>

The "/hadoop_log" here was created with the command hdfs dfs -mkdir /hadoop_log; it is a directory in HDFS, not a path on the local Linux filesystem.

(2) The yarn.application.classpath property in yarn-site.xml needs to be commented out. (Note that the hardcoded entries below point at /opt/modules/hadoop/hadoop-2.7.6, while the other configuration files in this setup use /opt/modules/hadoop-2.7.6; a classpath of non-existent paths like this is a plausible cause of the AM container exiting with code 1, and commenting the property out lets YARN fall back to its default classpath.)

 <!--
<property>
  <name>yarn.application.classpath</name>
      <value>
           /opt/modules/hadoop/hadoop-2.7.6/*,
           /opt/modules/hadoop/hadoop-2.7.6/etc/*,
           /opt/modules/hadoop/hadoop-2.7.6/etc/hadoop/*,
           /opt/modules/hadoop/hadoop-2.7.6/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/common/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/common/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/mapreduce/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/mapreduce/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/hdfs/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/hdfs/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/yarn/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/yarn/lib/*
      </value>
</property>

-->

4. hadoop-2.7.6 configuration files

 (1) core-site.xml

<configuration>
<property>
 <name>fs.defaultFS</name>
 <value>hdfs://localhost.localdomain:8020</value>
</property>

<property>
 <name>hadoop.tmp.dir</name>
 <value>/opt/modules/hadoop-2.7.6/tmp</value>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>

</configuration>


(2) hdfs-site.xml

<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>
</configuration>

(3) yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
<property>
 <name>yarn.nodemanager.aux-services</name>
 <value>mapreduce_shuffle</value>
</property>

<property>
 <name>yarn.resourcemanager.hostname</name>
 <value>localhost.localdomain</value>
</property>

<!--
<property>
  <name>yarn.application.classpath</name>
      <value>
           /opt/modules/hadoop/hadoop-2.7.6/*,
           /opt/modules/hadoop/hadoop-2.7.6/etc/*,
           /opt/modules/hadoop/hadoop-2.7.6/etc/hadoop/*,
           /opt/modules/hadoop/hadoop-2.7.6/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/common/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/common/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/mapreduce/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/mapreduce/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/hdfs/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/hdfs/lib/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/yarn/*,
           /opt/modules/hadoop/hadoop-2.7.6/share/hadoop/yarn/lib/*
      </value>
</property>
-->

</configuration>


(4)  mapred-site.xml

<configuration>
<property>
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
</property>

<property>  
 <name>mapreduce.jobhistory.address</name>  
 <value>localhost.localdomain:10020</value>  
</property>

<property>
 <name>mapreduce.jobhistory.webapp.address</name>
 <value>localhost.localdomain:19888</value>
</property>

<property>  
    <name>yarn.app.mapreduce.am.staging-dir</name>  
    <value>/hadoop_log</value>  
</property>
</configuration>
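
With the four files above in place, restart the daemons before running Sqoop. A minimal sequence for this single-node Hadoop 2.7.6 setup, using the standard scripts shipped in sbin/ (the history server matches the mapreduce.jobhistory.address configured above):

start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver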

5. Sqoop 1.4.x configuration file sqoop-env.sh:

export HADOOP_COMMON_HOME=/opt/modules/hadoop-2.7.6
export HADOOP_MAPRED_HOME=/opt/modules/hadoop-2.7.6
#export HBASE_HOME=
#export HIVE_HOME=

export ZOOCFGDIR=/opt/modules/zookeeper-3.4.12
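
Sqoop also needs the MySQL JDBC driver on its classpath; the usual approach is to copy the connector jar into Sqoop's lib directory (the jar version below is only an example, use whichever you downloaded):

cp mysql-connector-java-5.1.46.jar $SQOOP_HOME/lib/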


6. Download links:

(1) hadoop-2.7.6: http://mirrors.hust.edu.cn/apache/

(hadoop-2.6.0-cdh5.9.3.tar.gz: http://archive.cloudera.com/cdh5/cdh/5/)

(2) Sqoop: http://mirrors.hust.edu.cn/apache/sqoop/

(3) ZooKeeper: https://www.apache.org/dyn/closer.cgi/zookeeper/

(4) JDK: http://www.oracle.com/technetwork/java/javase/downloads/index.html

(5) Storm: http://storm.apache.org/downloads.html


7. Other ways to import from MySQL into HDFS with Sqoop

(1) Specify the field delimiter and target directory

sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--table help_keyword \
--target-dir /user/hadoop11/my_help_keyword1 \
--fields-terminated-by '\t' \
-m 2

(2) With a WHERE condition

sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--where "name='STRING'" \
--table help_keyword \
--target-dir /sqoop/hadoop11/myoutport1 \
-m 1

(3) Import only specified columns

sqoop import \
--connect jdbc:mysql://hadoop1:3306/mysql \
--username root \
--password root \
--columns "name" \
--where "name='STRING'" \
--table help_keyword \
--target-dir /sqoop/hadoop11/myoutport22 \
-m 1

This is equivalent to: select name from help_keyword where name = 'STRING'

(4) Use a custom SQL query

sqoop import \
--connect jdbc:mysql://hadoop1:3306/ \
--username root \
--password root \
--target-dir /user/hadoop/myimport33_1 \
--query 'select help_keyword_id,name from mysql.help_keyword where $CONDITIONS and name = "STRING"' \
--split-by help_keyword_id \
--fields-terminated-by '\t' \
-m 4
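
A note on quoting: Sqoop requires the literal token $CONDITIONS in a free-form query so it can substitute a split predicate for each mapper. Inside single quotes, as above, it can be written as-is; per the Sqoop user guide, if the query is wrapped in double quotes it must be escaped as \$CONDITIONS so the shell does not expand it:

--query "select help_keyword_id,name from mysql.help_keyword where \$CONDITIONS and name = 'STRING'"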
