Problem 1:
[root@hadoop-1 hadoop]# hdfs dfs -appendToFile hdfs-site.xml /p1
18/12/08 00:31:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
appendToFile: Failed to APPEND_FILE /p1 for DFSClient_NONMAPREDUCE_985284284_1 on 192.168.137.101 because lease recovery is in progress. Try again later.
[root@hadoop-1 hadoop]#
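This message means a previous write or append left an open lease on /p1 that the NameNode is still recovering; waiting a moment and retrying is often enough. You can also trigger recovery explicitly, either from the CLI with hdfs debug recoverLease -path /p1 (available in Hadoop 2.7 and later) or from Java through DistributedFileSystem.recoverLease. Below is a minimal sketch, assuming fs.defaultFS in the client configuration already points at this cluster; the class name RecoverLease is just a placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLease {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Asks the NameNode to start (or finish) lease recovery on /p1;
            // returns true once the lease is released and the file is closed.
            boolean recovered = dfs.recoverLease(new Path("/p1"));
            System.out.println("lease recovered: " + recovered);
        }
    }
}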
Problem 2:
Exception in thread "main" java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.10.22.17:50010, 10.10.22.18:50010], original=[10.10.22.17:50010, 10.10.22.18:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:960)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1026)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1175)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:531)
Solution

With the DEFAULT replacement policy, a DataNode that fails in the write pipeline must be replaced by a spare one; on a small cluster (only two DataNodes appear in the pipeline above) there is no spare node to try, so the append fails. The fix is to edit hdfs-site.xml under the Hadoop installation directory, at etc/hadoop/hdfs-site.xml:
vim hdfs-site.xml
<!-- enable appendToFile appends -->
<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>
<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
</property>
<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
</property>
Add the properties above to the file.
Stop and restart the HDFS services (e.g. sbin/stop-dfs.sh followed by sbin/start-dfs.sh), then retry the append; it should now succeed.
Alternatively, the same settings can be applied in the client code. For a Java client, add the following:
Configuration conf = new Configuration();
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
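For context, here is a minimal end-to-end sketch that applies these settings and then performs an append; it assumes fs.defaultFS points at the cluster, and the path /p1, the sample payload, and the class name AppendDemo are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Relax the pipeline-replacement policy so an append on a small
        // cluster does not fail when no spare DataNode is available.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

        FileSystem fs = FileSystem.get(conf);
        try (FSDataOutputStream out = fs.append(new Path("/p1"))) {
            out.write("appended line\n".getBytes("UTF-8"));
        }
    }
}

Note that NEVER disables pipeline replacement entirely, trading write durability for availability; it is generally only sensible on very small clusters (around three DataNodes or fewer), which is exactly the situation the exception above describes.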