Hadoop on Windows: getting the exception "is not a valid DFS filename"
I am new to Hadoop and struggling with the initial steps. I wrote a word count program in Eclipse and built a JAR for it.
I tried to run it with the following Hadoop command:
$ ./hadoop jar C:/cygwin64/home/PAKU/hadoop-1.2.1/wordcount.jar com.hadoopexpert.WordCountDriver file:///C:/cygwin64/home/PAKU/work/hadoopdata/tmp/dfs/ddata/file.txt file:///C:/cygwin64/home/PAKU/hadoop-dir/datadir/tmp/output
and it failed with this exception:
Exception in thread "main" java.lang.IllegalArgumentException: Pathname /C:/cygwin64/home/PAKU/work/hadoopdata/tmp/mapred/staging/PAKU/.staging from hdfs://localhost:50000/C:/cygwin64/home/PAKU/work/hadoopdata/tmp/mapred/staging/PAKU/.staging is not a valid DFS filename.
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:143)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:554)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:788)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:109)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at com.hadoopexpert.WordCountDriver.main(WordCountDriver.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Note: I am running Hadoop on Windows using Cygwin.
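From the trace it looks like the job client resolves every path against hdfs://localhost:50000, so my file:///C:/... arguments get rejected. Would copying the input into HDFS and passing HDFS paths be the right approach? A sketch of what I have in mind (the /user/PAKU/... paths are made up, not my actual layout):

$ ./hadoop fs -mkdir /user/PAKU/input
$ ./hadoop fs -put C:/cygwin64/home/PAKU/work/hadoopdata/tmp/dfs/ddata/file.txt /user/PAKU/input/
$ ./hadoop jar C:/cygwin64/home/PAKU/hadoop-1.2.1/wordcount.jar com.hadoopexpert.WordCountDriver /user/PAKU/input /user/PAKU/output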
Code:
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) {
        try {
            Job job = new Job();
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // args[0] = input path, args[1] = output path (must not already exist)
            FileInputFormat.setInputPaths(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            try {
                System.exit(job.waitForCompletion(true) ? 0 : -1);
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
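Alternatively, if the job is meant to run entirely on the local filesystem (no HDFS at all), I gather the driver can be handed a Configuration that overrides the default filesystem. A minimal sketch, assuming Hadoop 1.x property names; I have not verified this on my setup:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Assumption: use the local filesystem and the local job runner, so that
// file:///C:/... paths are never handed to DistributedFileSystem (Hadoop 1.x keys).
conf.set("fs.default.name", "file:///");
conf.set("mapred.job.tracker", "local");
Job job = new Job(conf);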
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) {
        // Sum all the 1s emitted by the mapper for this word
        int total = 0;
        for (IntWritable count : values) {
            total += count.get();
        }
        try {
            context.write(key, new IntWritable(total));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context) {
        // Emit (word, 1) for every space-separated token in the line
        for (String word : value.toString().split(" ")) {
            try {
                context.write(new Text(word), new IntWritable(1));
            } catch (IOException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
Can anyone help me get my first Hadoop program running?
Thanks in advance.
2016-12-24
PKH
Post your code. I think you are specifying an invalid path in `main`.
@AniMenon - I have added the code. Could you please help?
@AniMenon - How do I find the HDFS location from the command line?
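You should be able to inspect what actually exists in HDFS with the fs shell, for example (the output path below is assumed, not confirmed):

$ ./hadoop fs -ls /
$ ./hadoop fs -ls /user/PAKU
$ ./hadoop fs -cat /user/PAKU/output/part-r-00000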