1. Prerequisite: install the JDK.
2. Create a dedicated hadoop user; the name is up to you, e.g. hadooper.
3. Switch to the hadoop user and put the Hadoop tarball in that user's home directory.
4. Edit conf/core-site.xml, conf/hdfs-site.xml, and conf/mapred-site.xml:
4.1. conf/core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
4.2. conf/mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
4.3. conf/hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
4.4. Put localhost in both the conf/masters and conf/slaves files.
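The edits in step 4 can be scripted. Below is a minimal sketch; CONF_DIR is a scratch stand-in for the real conf/ directory (an assumption here, so the script can run anywhere without touching a live install):

```shell
#!/usr/bin/env bash
# Sketch: generate the three pseudo-distributed config files from step 4.
CONF_DIR=$(mktemp -d)   # stand-in for the real conf/ directory

write_site() {  # write_site <file> <property-name> <property-value>
  cat > "$CONF_DIR/$1" <<EOF
<?xml version="1.0"?>
<configuration>
  <property>
    <name>$2</name>
    <value>$3</value>
  </property>
</configuration>
EOF
}

write_site core-site.xml   fs.default.name    hdfs://localhost:9000
write_site mapred-site.xml mapred.job.tracker localhost:9001
write_site hdfs-site.xml   dfs.replication    1

# Pseudo-distributed mode: both files contain just localhost.
echo localhost > "$CONF_DIR/masters"
echo localhost > "$CONF_DIR/slaves"

grep -h '<name>' "$CONF_DIR"/*.xml   # quick sanity check
```

To use it against a real install, point CONF_DIR at the conf/ directory under the unpacked Hadoop tarball instead of a temp directory.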
5. Format a new distributed filesystem: $ bin/hadoop namenode -format
6. Start the Hadoop daemons: $ bin/start-all.sh (you can verify with jps that NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker are running)
7. Run one of the provided examples (first stage the input files in HDFS with $ bin/hadoop fs -put conf input):
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
If you hit:
Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
then stop iptables: service iptables stop
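A quick way to tell whether the ConnectException means the NameNode is simply not listening (as opposed to a firewall drop) is to probe the port directly. A sketch using bash's /dev/tcp; port 9000 matches the fs.default.name setting above, and port_open is a hypothetical helper:

```shell
#!/usr/bin/env bash
# Hypothetical helper: returns 0 if host:port accepts a TCP connection.
port_open() {
  local host=$1 port=$2
  # bash opens /dev/tcp/<host>/<port> as a real TCP connection;
  # the subshell closes fd 3 automatically on exit
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

if port_open localhost 9000; then
  echo "namenode port 9000: reachable"
else
  echo "namenode port 9000: closed (daemon not running, or blocked by iptables)"
fi
```

If the port is closed while jps still shows a NameNode process, suspect iptables; if jps shows no NameNode at all, check the NameNode log under logs/ instead.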
8. Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hadoop fs -get output output
$ cat output/*
or view the output files on the distributed filesystem:
$ bin/hadoop fs -cat output/*
When you're done, stop the daemons with:
$ bin/stop-all.sh
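Steps 5 through 8 above can be strung together into one script. A sketch, assuming the Hadoop 1.x layout used throughout (bin/hadoop, bin/start-all.sh relative to the install root); it reports and returns early when bin/hadoop is absent, so it is safe to dry-run elsewhere:

```shell
#!/usr/bin/env bash
# Sketch of steps 5-8 in order; run from the Hadoop install root.
run_quickstart() {
  if [ ! -x bin/hadoop ]; then
    echo "bin/hadoop not found; run from the Hadoop install directory"
    return 0  # skip gracefully on machines without Hadoop
  fi
  bin/hadoop namenode -format &&   # step 5
  bin/start-all.sh &&              # step 6
  bin/hadoop fs -put conf input && # stage input files for the example
  bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+' &&  # step 7
  bin/hadoop fs -cat 'output/*' && # step 8
  bin/stop-all.sh
}

run_quickstart
```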