1. First, install the SSH service on Linux (see my article on installing SSH).
2. Next, install Hadoop:
1> Download the Hadoop tar package from the web: download link
2> Extract it with the command
tar -zxvf /download-path/tarball
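The flags mean: -z decompress gzip, -x extract, -v list files as they are unpacked, -f read from the named archive. A self-contained sketch that builds a throwaway tarball (the hadoop-1.0.0 name is hypothetical) and extracts it with exactly those flags:

```shell
# Build a dummy tarball with a hypothetical hadoop-1.0.0 layout.
mkdir -p /tmp/tardemo/hadoop-1.0.0
echo "hello" > /tmp/tardemo/hadoop-1.0.0/README.txt
tar -czf /tmp/tardemo/hadoop-1.0.0.tar.gz -C /tmp/tardemo hadoop-1.0.0

# -z: gunzip, -x: extract, -v: verbose listing, -f: archive file
mkdir -p /tmp/tardemo/out
tar -zxvf /tmp/tardemo/hadoop-1.0.0.tar.gz -C /tmp/tardemo/out
cat /tmp/tardemo/out/hadoop-1.0.0/README.txt
```

In a real install you would extract the downloaded tarball into a directory of your choice (e.g. under your home directory) instead of /tmp.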
3> Edit the configuration files in the conf folder:
hadoop-env.sh
Add the JDK install path:
export JAVA_HOME=/usr/java/jdk-7
If the path contains spaces, wrap the path in single quotes.
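For example, with a hypothetical JDK directory whose name contains a space, the line in hadoop-env.sh would look like this (the single quotes keep the path as one word):

```shell
# Hypothetical JDK path containing a space -- single quotes are required.
export JAVA_HOME='/usr/java/jdk 1.7'
echo "$JAVA_HOME"
```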
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://xxx.xxx.xx.xxx:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>xxx.xxx.xxx.xxx:9001</value>
  </property>
</configuration>
masters and slaves: change localhost in these files to the machine's IP address
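One way to make that edit non-interactively is with sed. A sketch using sample files and a placeholder address (192.168.1.100 is hypothetical; substitute your machine's IP):

```shell
# Work in a scratch directory; the real masters/slaves files live in Hadoop's conf/.
mkdir -p /tmp/confdemo && cd /tmp/confdemo
printf 'localhost\n' > masters
printf 'localhost\n' > slaves
# Replace localhost with the machine's IP (placeholder address).
sed -i 's/localhost/192.168.1.100/' masters slaves
cat masters slaves
```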
4> Format the NameNode and start the cluster
cd into the Hadoop directory, then run:
bin/hadoop namenode -format
bin/start-all.sh
Run jps to check whether startup succeeded:
13815 Jps
13762 TaskTracker
7409 SecondaryNameNode
7493 JobTracker
13236 NameNode
13454 DataNode
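Once jps shows all five daemons, a quick HDFS smoke test confirms the NameNode is actually serving requests. These commands must be run from the Hadoop directory against the live cluster started above (the /test path is illustrative), so they are shown here rather than as a standalone script:

```shell
# Requires the cluster started above; each command talks to the NameNode.
bin/hadoop fs -mkdir /test                     # create a directory in HDFS
bin/hadoop fs -put conf/core-site.xml /test    # upload a local file
bin/hadoop fs -ls /test                        # list it back
```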
5> Download the MyEclipse plugin for connecting to Hadoop
Put it in the dropins directory and restart MyEclipse.
6> Create a DFS Location
OK!!!