Problem:
When merging files on HDFS by appending to a destination file:

FSDataOutputStream out = fileSystem.append(desPath, Parameters.BUFFER_SIZE);

the following error is thrown when out is closed:

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.
(Nodes: current=[10.6.222.13:50010], original=[10.6.222.13:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
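For context, the merge logic that triggers this error looks roughly like the sketch below. Only the fileSystem.append(desPath, Parameters.BUFFER_SIZE) call comes from the original report; srcPaths, bufferSize, and the stream-copy details are assumptions for illustration.

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsAppendMerge {
    // Sketch: merge several HDFS files into one destination via append.
    // srcPaths and bufferSize are hypothetical names, not from the original post.
    public static void mergeByAppend(FileSystem fileSystem, Path desPath,
                                     Path[] srcPaths, int bufferSize) throws IOException {
        for (Path src : srcPaths) {
            // append() reopens the destination's last block for writing;
            // close() flushes the write pipeline, which is where the
            // "Failed to replace a bad datanode" IOException surfaces.
            try (FSDataInputStream in = fileSystem.open(src);
                 FSDataOutputStream out = fileSystem.append(desPath, bufferSize)) {
                IOUtils.copyBytes(in, out, bufferSize, false);
            }
        }
    }
}
```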
Solution:
This error typically occurs on small clusters (fewer spare datanodes than the replication factor): when a datanode drops out of the write pipeline, the DEFAULT replacement policy requires the client to find a substitute datanode, and it fails when none is available. Add the following properties to the client's hdfs-site.xml:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
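If editing hdfs-site.xml on the client is not convenient, the same two settings can be applied programmatically on the client's Configuration before obtaining the FileSystem. This is a sketch under the assumption that the client constructs its own Configuration; it requires a live HDFS cluster, so no expected output is shown.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AppendClientConfig {
    public static FileSystem openFileSystem() throws IOException {
        Configuration conf = new Configuration();
        // Keep the replace-datanode-on-failure feature enabled...
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
        // ...but never attempt to find a replacement datanode; keep writing
        // to the datanodes remaining in the pipeline instead of failing.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        return FileSystem.get(conf);
    }
}
```

Note that NEVER trades safety for availability: if pipeline datanodes keep failing, the block may end up with fewer replicas than configured until the namenode re-replicates it, so this policy is mainly suited to small (1-3 node) clusters.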