Copying data between Hadoop clusters of different versions
1. Basics
Use hadoop distcp to copy data between clusters.
2. In practice
When the two clusters run different Hadoop versions, copy like this:
hadoop distcp hftp://source:50070/source hdfs://dest:9000/source
Why hftp? Because the RPC protocols of different Hadoop versions are incompatible; hftp reads from the source over HTTP, which works across versions.
If one of the clusters is idle, try to use its MapReduce capacity, i.e. submit the distcp job from that cluster. One caveat: inside that cluster, refer to the target NameNode by IP address rather than hostname. Why? Try it and you'll see how much trouble unresolvable hostnames cause.
We also ran into the error below; take it as a warning.
ERROR org.apache.hadoop.mapred.JobHistory: Failed creating job history log file for job job_201308261457_0005
java.net.UnknownHostException: unknown host: namenode2
        at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:216)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1154)
        at org.apache.hadoop.ipc.Client.call(Client.java:1010)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
        at $Proxy5.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:208)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:175)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1310)
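A quick way to catch this class of failure before submitting the job is to check that every NameNode address resolves from the machine running distcp. A minimal sketch (the hostnames below are placeholders, not real cluster names):

```python
import socket

def resolves(host):
    """Return True if `host` can be resolved to an IP from this machine."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Placeholder addresses -- substitute your own NameNodes.
# An IP literal always "resolves"; a hostname only if DNS/hosts knows it.
for host in ["namenode2", "192.168.1.99"]:
    print(host, "ok" if resolves(host) else "UNRESOLVED")
```

Run this on every node that will execute map tasks, not just the submitting host, since the UnknownHostException above was thrown inside a task.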
If the files are very large or very numerous, don't copy everything in a single run; instead, use a script to walk the directory tree and emit a batch of per-subdirectory copy commands. The Python script below is a throwaway helper (unoptimized, so bear with it):
#!/usr/bin/env python
# Quick helper: walk a directory on the source cluster and print, for each
# subdirectory, the "fs -mkdir" and "distcp" commands needed to copy it.
import os

def list_dir(exec_cmd, s_dir):
    # Run "hadoop fs -ls <dir>" and return the whitespace-split fields
    # of each listed entry.
    items = []
    lines = os.popen(exec_cmd + s_dir).read().strip().split("\n")
    for line in lines:
        if "-" in line:  # skip the "Found N items" header
            items.append(line.split())
    return items

hadoop = "/opt/modules/hadoop/hadoop-0.20.203.0/bin/hadoop"
dir = "/data/dw/"
cmd = hadoop + " fs -ls hdfs://192.168.1.99:9000"  # destination cluster
cmd2 = hadoop + " fs -ls hdfs://source:9000"       # source cluster

for line in list_dir(cmd2, dir):
    if "-" in line:  # directories show "-" in the replication column
        slines = list_dir(cmd2, line[-1])  # one level down (per year)
        # create the parent directory on the destination before copying
        print "%s fs -mkdir hdfs://192.168.1.99:9000%s" % (hadoop, line[-1].replace("data/", "data/sbak/"))
        for sline in slines:
            if "-" in sline:
                print "%s distcp hftp://source:50070%s hdfs://192.168.1.99:9000%s" % (hadoop, sline[-1], line[-1].replace("data/", "data/sbak/"))
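The script's parsing relies on the shape of hadoop fs -ls output: the path is always the last field, and the permissions string in the first field starts with "d" for directories. A self-contained illustration on a captured sample (the sample lines below are made up for the example):

```python
# Made-up sample of "hadoop fs -ls" output for illustration.
SAMPLE = """Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2013-08-26 14:57 /data/dw/2013
-rw-r--r--   3 hadoop supergroup    1048576 2013-08-26 14:58 /data/dw/README
"""

def parse_ls(text):
    """Return (is_dir, path) for every entry in `hadoop fs -ls` output."""
    entries = []
    for line in text.strip().split("\n"):
        if line.startswith("Found"):  # header line, skip it
            continue
        fields = line.split()
        entries.append((fields[0].startswith("d"), fields[-1]))
    return entries

print(parse_ls(SAMPLE))
# [(True, '/data/dw/2013'), (False, '/data/dw/README')]
```

Checking the permissions column directly is sturdier than the "-" membership test in the quick script, which works only because directory entries carry "-" in the replication column.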
The part below is original material from my own run.
14/07/30 16:47:20 INFO mapreduce.Job: map 0% reduce 0%
14/07/30 16:47:33 INFO mapreduce.Job: map 100% reduce 0%
14/07/30 16:47:34 INFO mapreduce.Job: Task Id : attempt_1406708172793_0010_m_000000_1, Status : FAILED
Error: java.io.IOException: File copy failed: hftp://hadoop11/in/word --> hdfs://192.168.1.129:9000/in3/word
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:262)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hftp://hadoop11/in/word to hdfs://192.168.1.129:9000/in3/word
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
... 10 more
Caused by: java.io.IOException: Check-sum mismatch between hftp://hadoop11/in/word and hdfs://192.168.1.129:9000/in3/.distcp.tmp.attempt_1406708172793_0010_m_000000_1. Source and target differ in block-size. Use -pb to preserve block-sizes during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. (NOTE: By skipping checksums, one runs the risk of masking data-corruption during file-transfer.)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:190)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:125)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:95)
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
... 11 more
Solution: make the destination's checksum type match the source. Hadoop 2.x defaults to CRC32C while older versions use CRC32, so set this in hdfs-site.xml on the destination cluster:
<property>
  <name>dfs.checksum.type</name>
  <value>CRC32</value>
</property>
Alternatively, as the error message itself suggests, pass -pb to distcp to preserve block sizes, or skip the checksum comparison entirely (at the risk of masking corruption).
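Some background on why block size matters here: HDFS's default file checksum is a composite, an MD5 over per-block MD5s of chunk CRCs, so identical bytes split at different block boundaries produce different file checksums. A toy model of that structure (not the real HDFS algorithm):

```python
import hashlib
import zlib

def toy_file_checksum(data, block_size, chunk_size=512):
    """MD5 over per-block MD5s of chunk CRC32s -- mimics the *structure*
    of HDFS's composite file checksum, greatly simplified."""
    block_md5s = []
    for b in range(0, len(data), block_size):
        block = data[b:b + block_size]
        # concatenate the CRC32 of each chunk inside this block
        crcs = b"".join(
            zlib.crc32(block[c:c + chunk_size]).to_bytes(4, "big")
            for c in range(0, len(block), chunk_size))
        block_md5s.append(hashlib.md5(crcs).digest())
    return hashlib.md5(b"".join(block_md5s)).hexdigest()

data = b"x" * 4096
# Same bytes, different block boundaries -> different composite checksums.
print(toy_file_checksum(data, 1024) != toy_file_checksum(data, 2048))
```

This is why distcp between clusters with different block sizes (or different checksum types) fails verification even when every byte copied correctly.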
mac@vip.126.com 2014/7/30 17:04:00
Note on ports: the source side uses the hftp (HTTP) port 50070; the destination uses the HDFS RPC port 9000.