Follow hadoop-0.20.2/docs/quickstart.html.
Note: ssh-copy-id -i ~/.ssh/id_rsa.pub localhost (my username is fansxnet).
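The quickstart's passphraseless-ssh step looks like the following sketch (a scratch directory is used here so the commands are safe to re-run; in a real setup the key lives under ~/.ssh):

```shell
# Generate a passphraseless RSA key pair (the quickstart uses ~/.ssh/id_rsa).
rm -rf /tmp/hadoop-ssh-demo
mkdir -p /tmp/hadoop-ssh-demo
ssh-keygen -t rsa -P '' -f /tmp/hadoop-ssh-demo/id_rsa

# Authorize the public key for password-less logins:
cat /tmp/hadoop-ssh-demo/id_rsa.pub >> /tmp/hadoop-ssh-demo/authorized_keys

# With a running sshd, the equivalent one-liner is:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
# and the result can be verified with:
#   ssh localhost
```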
Configure Hadoop in pseudo-distributed mode; if the following pages open, the setup succeeded:
- NameNode - http://localhost:50070/
- JobTracker - http://localhost:50030/
Port assignments:
- HDFS: 9000
- MapReduce (JobTracker): 9001
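These two ports correspond to the standard 0.20.x configuration entries in conf/core-site.xml and conf/mapred-site.xml:

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```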
Install the Hadoop Eclipse plugin by copying hadoop-0.20.2/contrib/eclipse-plugin/hadoop-0.20.2-eclipse-plugin.jar into Eclipse's plugins directory, then restart Eclipse.
Configure the plugin:
![](http://dl.iteye.com/upload/attachment/0064/0369/0e3f9807-7fe1-3ea9-80c6-38a22efacbff.png)
Open the Map/Reduce Locations view and choose New Hadoop Location:
![](http://dl.iteye.com/upload/attachment/0064/0358/04aa764c-76c9-3238-b9d8-9ad6bdbbed8b.png)
Once that is done, you can browse the distributed file system:
![](http://dl.iteye.com/upload/attachment/0064/0360/1abfa514-e4ee-39a9-8856-6aed1c54d93b.png)
Create a new Map/Reduce project named hadoop.
Copy the bundled example sources from hadoop-0.20.2/src/examples/ into the project's src directory.
Create a local input directory containing two files:
- file1: Hello World Bye World
- file2: Hello Hadoop Goodbye Hadoop
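Creating the sample input from a shell looks like this (file names and contents as above):

```shell
# Create the local input directory with the two sample files.
mkdir -p input
printf 'Hello World Bye World\n'       > input/file1
printf 'Hello Hadoop Goodbye Hadoop\n' > input/file2
cat input/file1 input/file2
```

The folder can then be uploaded from the Eclipse DFS view, or on the command line with `bin/hadoop fs -put input input`.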
Upload the input folder to hdfs://localhost:9000/user/fansxnet/:
![](http://dl.iteye.com/upload/attachment/0064/0363/fd808695-2043-3481-82ad-3831c14a72e5.png)
Run org.apache.hadoop.examples.WordCount, passing the input and output folders as program arguments:
hdfs://localhost:9000/user/fansxnet/input hdfs://localhost:9000/user/fansxnet/output
![](http://dl.iteye.com/upload/attachment/0064/0365/b3bd509c-1609-338c-85f8-c17d8e9a6ffa.png)
After the job finishes, refresh the HDFS folder to see the word counts:
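As a sanity check, the counts WordCount should produce for the two files above can be reproduced with a plain-Java sketch of the same tokenize-and-sum logic (no Hadoop classes; `WordCountSim` is a name chosen here for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

public class WordCountSim {
    // Count whitespace-separated tokens across all lines, sorted by word.
    public static Map<String, Integer> count(String... lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            for (String token : line.split("\\s+")) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // The contents of input/file1 and input/file2 from the tutorial.
        System.out.println(count("Hello World Bye World",
                                 "Hello Hadoop Goodbye Hadoop"));
        // → {Bye=1, Goodbye=1, Hadoop=2, Hello=2, World=2}
    }
}
```

The real WordCount arrives at the same totals via a mapper emitting (word, 1) pairs and a reducer summing them per key.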
![](http://dl.iteye.com/upload/attachment/0064/0367/20e64a22-71ea-356f-a5c0-49f59644b453.png)