java – How to append to an HDFS file on a very small cluster (3 nodes or less)

I am trying to append to a file on HDFS on a single-node cluster. I also tried a 2-node cluster, but I get the same exception.

In hdfs-site.xml I have dfs.replication set to 1. If I set dfs.client.block.write.replace-datanode-on-failure.policy to DEFAULT, I get the following exception:

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.10.37.16:50010], original=[10.10.37.16:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

If I follow the recommendation in the comment for configuration in hdfs-default.xml for extremely small clusters (3 nodes or less) and set dfs.client.block.write.replace-datanode-on-failure.policy to NEVER, I get the following exception instead:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot append to file/user/hadoop/test. Name node is in safe mode.

The reported blocks 1277 has reached the threshold 1.0000 of total blocks 1277. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 3 seconds.
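
For reference, the policy change itself can also be made from the client side instead of in hdfs-site.xml. A minimal sketch, using the property key named in the first exception; "NEVER" is the value the hdfs-default.xml comment suggests for clusters of 3 nodes or less:

import org.apache.hadoop.conf.Configuration;

// Client-side equivalent of the hdfs-site.xml change (sketch only):
Configuration conf = new Configuration();
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");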

This is how I append:

import java.io.OutputStream;
import java.io.PrintWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://MY-MACHINE:8020/user/hadoop");
conf.set("hadoop.job.ugi", "hadoop");
FileSystem fs = FileSystem.get(conf);
OutputStream out = fs.append(new Path("/user/hadoop/test"));
PrintWriter writer = new PrintWriter(out);
writer.print("hello world");
writer.close();
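
One thing I noticed while writing this up: PrintWriter never throws IOException, so a failed append would be silent unless checkError() is consulted. A sketch of the same write with explicit error checking (same fs, path, and URI as above; java.io.IOException additionally imported):

// Same append, but surfacing failures that PrintWriter would otherwise
// swallow; PrintWriter only reports trouble through checkError().
OutputStream out = fs.append(new Path("/user/hadoop/test"));
PrintWriter writer = new PrintWriter(out);
writer.print("hello world");
writer.close();
if (writer.checkError()) {
    throw new IOException("append to /user/hadoop/test failed");
}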

Is there anything I am doing wrong in the code?

Or is something perhaps missing in the configuration?

Any help would be greatly appreciated!

EDIT

Even though dfs.replication is set to 1, when I check the status of the file with

FileStatus[] status = fs.listStatus(new Path("/user/hadoop"));

I find that status[i].block_replication is set to 3. I don't think this is the problem, because when I changed the value of dfs.replication to 0 I got a related exception, so it clearly does obey the value of dfs.replication. Still, to be on the safe side, is there a way to change the block_replication value of each file?
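
I found FileSystem.setReplication, which looks like the relevant call, though I have not verified it on my cluster. A minimal sketch, using the same fs and path as above:

// Sketch: lower the replication factor of an existing file to 1.
// setReplication returns true if the namenode accepted the change.
boolean changed = fs.setReplication(new Path("/user/hadoop/test"), (short) 1);
System.out.println("replication changed: " + changed);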
