[Sqoop][MySQL import to Hadoop] ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already

Error 1:
17/03/29 11:18:53 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/bde23e1bdcb658a74784c760aeca9881/fda_djml.jar
17/03/29 11:18:53 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
17/03/29 11:18:53 INFO mapreduce.ImportJobBase: Beginning import of fda_djml
17/03/29 11:18:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/29 11:18:53 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/03/29 11:18:54 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/03/29 11:18:54 INFO client.RMProxy: Connecting to ResourceManager at master/10.211.55.10:8032
17/03/29 11:18:56 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/29 11:18:56 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`djguid`), MAX(`djguid`) FROM fda_djml
17/03/29 11:18:56 WARN db.TextSplitter: Generating splits for a textual index column.
17/03/29 11:18:56 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
17/03/29 11:18:56 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
17/03/29 11:18:56 INFO mapreduce.JobSubmitter: number of splits:6
17/03/29 11:18:56 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1490572354639_0017
17/03/29 11:18:56 INFO impl.YarnClientImpl: Submitted application application_1490572354639_0017
17/03/29 11:18:57 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1490572354639_0017/
17/03/29 11:18:57 INFO mapreduce.Job: Running job: job_1490572354639_0017
17/03/29 11:19:09 INFO mapreduce.Job: Job job_1490572354639_0017 running in uber mode : false
17/03/29 11:19:09 INFO mapreduce.Job:  map 0% reduce 0%
17/03/29 11:19:16 INFO mapreduce.Job:  map 33% reduce 0%
17/03/29 11:19:23 INFO mapreduce.Job:  map 100% reduce 0%
17/03/29 11:19:28 INFO ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
17/03/29 11:19:29 INFO ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
17/03/29 11:19:30 INFO ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
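
The import was launched from the Sqoop command line. A command along the following lines matches the log above (the JDBC URL, database name, credentials and target directory are placeholders, not taken from the original job); note the TextSplitter warning: if the table has an integral primary key, passing it to --split-by is safer than the textual djguid column.

# hypothetical command matching the log; host, database, user and paths are assumptions
sqoop import \
  --connect jdbc:mysql://mysql-host:3306/fda \
  --username fda_user -P \
  --table fda_djml \
  --split-by djguid \
  --direct \
  --target-dir /user/root/fda_djml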

Error 2:

2017-03-29 11:19:22,038 ERROR [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while unregistering 
java.lang.NullPointerException
	at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.getApplicationWebURLOnJHSWithoutScheme(MRWebAppUtil.java:135)
	at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.getApplicationWebURLOnJHSWithScheme(MRWebAppUtil.java:150)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.doUnregistration(RMCommunicator.java:212)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.unregister(RMCommunicator.java:182)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStop(RMCommunicator.java:255)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStop(RMContainerAllocator.java:272)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
	at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStop(MRAppMaster.java:821)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
	at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
	at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
	at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
	at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStop(MRAppMaster.java:1497)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.stop(MRAppMaster.java:1094)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.shutDownJob(MRAppMaster.java:556)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler$1.run(MRAppMaster.java:603)
2017-03-29 11:19:22,040 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:4 AssignedReds:0 CompletedMaps:6 CompletedReds:0 ContAlloc:9 ContRel:3 HostLocal:0 RackLocal:0
2017-03-29 11:19:22,040 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Skipping cleaning up the staging dir. assuming AM will be retried.
2017-03-29 11:19:22,040 INFO [Thread-68] org.apache.hadoop.ipc.Server: Stopping server on 53611

Fix:

The NullPointerException is thrown from MRWebAppUtil.getApplicationWebURLOnJHSWithoutScheme while the ApplicationMaster unregisters, which points to the AM being unable to build the JobHistory Server URL; the ipc.Client retries in error 1 are the job client still polling the finished AM on spark002 because there is no reachable history server to fall back to. Edit the configuration in mapred-site.xml: make sure mapreduce.jobhistory.address and mapreduce.jobhistory.webapp.address are set correctly, and also add the mapreduce.application.classpath property:

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

<property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
</property>

<property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
</property>

<property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/mr-history/tmp</value>
</property>

<property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/mr-history/done</value>
</property>
<property>
       <name>mapreduce.application.classpath</name>
       <value>
            /opt/hadoop-2.5.2/etc/hadoop,
            /opt/hadoop-2.5.2/share/hadoop/common/*,
            /opt/hadoop-2.5.2/share/hadoop/common/lib/*,
            /opt/hadoop-2.5.2/share/hadoop/hdfs/*,
            /opt/hadoop-2.5.2/share/hadoop/hdfs/lib/*,
            /opt/hadoop-2.5.2/share/hadoop/mapreduce/*,
            /opt/hadoop-2.5.2/share/hadoop/mapreduce/lib/*,
            /opt/hadoop-2.5.2/share/hadoop/yarn/*,
            /opt/hadoop-2.5.2/share/hadoop/yarn/lib/*
       </value>
</property>
</configuration>
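
After updating mapred-site.xml, copy it to every node in the cluster and (re)start the JobHistory Server before re-running the Sqoop job. A minimal sketch, assuming Hadoop is installed under /opt/hadoop-2.5.2 and spark002 stands in for the list of worker hosts:

# push the updated config to the workers (host list is an assumption)
scp /opt/hadoop-2.5.2/etc/hadoop/mapred-site.xml spark002:/opt/hadoop-2.5.2/etc/hadoop/

# restart the JobHistory Server on master so the new mapreduce.jobhistory.* addresses take effect
/opt/hadoop-2.5.2/sbin/mr-jobhistory-daemon.sh stop historyserver
/opt/hadoop-2.5.2/sbin/mr-jobhistory-daemon.sh start historyserver

# quick check that the history server web UI answers on the configured port
curl http://master:19888/ws/v1/history/info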




