In a Java app running on an edge node, I need to delete an HDFS folder if it exists. I need to do that before running a MapReduce job (with Spark) that writes its output to that folder.
I found that I could use the method
org.apache.hadoop.fs.FileUtil.fullyDelete(new File(url))
However, I can only make it work with a local folder (i.e. a file URL on the machine the app runs on). I tried to use something like:
url = "hdfs://hdfshost:port/the/folder/to/delete";
with hdfs://hdfshost:port being the HDFS namenode IPC address. I use it for the MapReduce job, so it is correct.
However, it doesn't do anything.
So, what URL should I use, or is there another method?
Note: here is the simple project in question.
Solution
I do it this way:
Configuration conf = new Configuration();
conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
FileSystem hdfs = FileSystem.get(URI.create("hdfs://:"), conf);
hdfs.delete(new Path("/path/to/your/file"), isRecursive);
You don't need hdfs://hdfshost:port/ in your file path. Note that FileSystem.delete takes an org.apache.hadoop.fs.Path, not a String, and the second argument makes the delete recursive.
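Putting it together, here is a minimal self-contained sketch. The namenode address (hdfshost:8020) and the folder path are assumptions taken from the question's placeholders; substitute your own. It checks that the folder exists before deleting it recursively:

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    public static void main(String[] args) throws IOException {
        // Hypothetical namenode URI and output folder -- adjust for your cluster.
        URI namenode = URI.create("hdfs://hdfshost:8020");
        Path outputDir = new Path("/the/folder/to/delete");

        Configuration conf = new Configuration();
        // try-with-resources closes the FileSystem handle when done.
        try (FileSystem hdfs = FileSystem.get(namenode, conf)) {
            if (hdfs.exists(outputDir)) {
                // 'true' deletes the folder and everything under it.
                boolean deleted = hdfs.delete(outputDir, true);
                System.out.println("Deleted " + outputDir + ": " + deleted);
            }
        }
    }
}
```

Because the FileSystem is obtained from the hdfs:// URI, the Path itself can stay scheme-less; the delete then runs against HDFS rather than the local filesystem, which is why FileUtil.fullyDelete(new File(url)) never worked here.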