Environment: Windows 10 64-bit + Java 8 + Hadoop 2.7.5
Why not install Java 9?
Because the step after Hadoop is installing Spark, and the Spark build used here targets Scala 2.11, which in turn requires Java 8.
1. Install and configure Java 8
- Download and install
Download the 64-bit JDK 8 from the official site at http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and double-click the .exe to run the installer. (The install path should preferably contain no spaces.)
- Configure environment variables
- Create an environment variable JAVA_HOME whose value is the JDK install directory (mine is C:\Install\jdk1.8.0_161)
- Create an environment variable JRE_HOME whose value is the JRE install directory (mine is C:\Install\jre)
- Create an environment variable CLASSPATH with the value .;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar
- Add %JAVA_HOME%\bin and %JRE_HOME%\bin to the path environment variable
- Finally, run java -version in a terminal; output similar to the following means the install succeeded (a verification sketch follows the output):
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
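As an extra sanity check beyond java -version, here is a minimal sketch (a hypothetical file JavaCheck.java; it only assumes javac and java are now on the path) that prints which runtime is actually being picked up:

// JavaCheck.java -- confirms which Java runtime the path resolves to
public class JavaCheck {
    public static void main(String[] args) {
        // Should print 1.8.0_161 if the freshly installed JDK is active
        System.out.println("java.version = " + System.getProperty("java.version"));
        // Should point under C:\Install\jdk1.8.0_161 (or its bundled jre)
        System.out.println("java.home    = " + System.getProperty("java.home"));
        // Should echo the JAVA_HOME variable configured above
        System.out.println("JAVA_HOME    = " + System.getenv("JAVA_HOME"));
    }
}

Compile and run it with javac JavaCheck.java followed by java JavaCheck.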
2. Install and configure Hadoop
1. Download Hadoop 2.7.5 from the official site at http://hadoop.apache.org/releases.html and unpack it into your install directory.
2. From https://pan.baidu.com/s/1_C4TALLBW9TbIrJMC4JKtA (password: dc63), download a tool that lets Hadoop run on Windows, unpack it, and overwrite Hadoop's bin directory with the tool's bin directory. (I have personally verified the tool works with Hadoop 2.5 and 2.7; other versions are untested.)
3. Create an environment variable HADOOP_HOME whose value is the Hadoop install directory (here C:\Install\hadoop-2.7.5).
4. Add %HADOOP_HOME%\bin to the path environment variable.
5. Create a workplace directory under C:\Install\hadoop-2.7.5, then inside workplace create three subfolders: temp, data, and name (a sketch that does this follows below).
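Instead of clicking through Explorer, the same layout can be created with a short Java sketch (a hypothetical file MakeWorkplace.java; it assumes the C:\Install\hadoop-2.7.5 path used above):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// MakeWorkplace.java -- creates workplace\temp, workplace\data, workplace\name
public class MakeWorkplace {
    public static void main(String[] args) throws IOException {
        Path base = Paths.get("C:\\Install\\hadoop-2.7.5", "workplace");
        for (String sub : new String[] {"temp", "data", "name"}) {
            // createDirectories also creates the workplace parent the first time
            Path dir = Files.createDirectories(base.resolve(sub));
            System.out.println("created " + dir);
        }
    }
}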
6. Edit the Hadoop configuration files.
Edit C:\Install\hadoop-2.7.5\etc\hadoop\core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/C:/Install/hadoop-2.7.5/workplace/temp</value>
</property>
<!-- dfs.name.dir is the deprecated alias of dfs.namenode.name.dir, which is also set in hdfs-site.xml below -->
<property>
<name>dfs.name.dir</name>
<value>/C:/Install/hadoop-2.7.5/workplace/name</value>
</property>
<!-- fs.default.name is the deprecated alias of fs.defaultFS; both are set so older tools resolve the same NameNode -->
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
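To see how Hadoop consumes these XML files, here is a small hedged sketch (a hypothetical file PrintFsDefault.java; it assumes the Hadoop 2.7.5 jars such as hadoop-common and their dependencies are on the classpath) that loads core-site.xml and reads back the NameNode address:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// PrintFsDefault.java -- loads core-site.xml and prints the configured NameNode URI
public class PrintFsDefault {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // addResource merges the file's <property> entries into this Configuration
        conf.addResource(new Path("C:/Install/hadoop-2.7.5/etc/hadoop/core-site.xml"));
        // Expect hdfs://localhost:9000 as configured above
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
    }
}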
Edit C:\Install\hadoop-2.7.5\etc\hadoop\mapred-site.xml (if your distribution only ships mapred-site.xml.template, copy it to mapred-site.xml first):
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- mapred.job.tracker is a legacy MR1 setting and is ignored when mapreduce.framework.name is yarn -->
<property>
<name>mapred.job.tracker</name>
<value>hdfs://localhost:9001</value>
</property>
</configuration>
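With mapreduce.framework.name set to yarn, any standard MapReduce job is now submitted to the local YARN daemons. As a concrete example, this is essentially the stock WordCount from the Hadoop MapReduce tutorial (input and output paths are passed as arguments; it assumes the Hadoop 2.7.5 client jars on the classpath):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// WordCount.java -- the standard example job; under this config it runs on YARN
public class WordCount {
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context ctx) throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) { word.set(it.nextToken()); ctx.write(word, ONE); }
        }
    }
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> vals, Context ctx) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : vals) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up mapreduce.framework.name=yarn
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Package it into a jar and submit it, for example, with hadoop jar wordcount.jar WordCount <input> <output> once the daemons from step 8 below are running.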
Edit C:\Install\hadoop-2.7.5\etc\hadoop\hdfs-site.xml:
<configuration>
<!-- Set replication to 1 because this is a single-node Hadoop -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- dfs.data.dir is the deprecated alias of dfs.datanode.data.dir below; both must point at the same data directory -->
<property>
<name>dfs.data.dir</name>
<value>/C:/Install/hadoop-2.7.5/workplace/data</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/C:/Install/hadoop-2.7.5/workplace/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/C:/Install/hadoop-2.7.5/workplace/data</value>
</property>
</configuration>
Edit C:\Install\hadoop-2.7.5\etc\hadoop\yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Edit C:\Install\hadoop-2.7.5\etc\hadoop\hadoop-env.cmd:
Find the line set JAVA_HOME=%JAVA_HOME% and replace %JAVA_HOME% with C:\Install\jdk1.8.0_161
7. Run hdfs namenode -format in a terminal; output like INFO util.ExitUtil: Exiting with status 0 means the format succeeded.
8. Change into Hadoop's sbin directory and run start-all.cmd.
9. Run jps in a terminal; output like the following means the daemons are up (a smoke test follows the output):
C:\Users\xiligey>jps
8496
10840 ResourceManager
16168 Jps
10604 NameNode
11084 DataNode
11180 NodeManager
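With the daemons up, a short smoke test (a hypothetical file HdfsSmokeTest.java; same classpath assumption as the earlier sketches) writes a file into HDFS and reads back its status; the replication it reports should be the 1 configured in hdfs-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// HdfsSmokeTest.java -- writes /tmp/smoke.txt to HDFS and inspects it
public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000"); // as in core-site.xml
        try (FileSystem fs = FileSystem.get(conf)) {
            Path p = new Path("/tmp/smoke.txt");
            // create(path, overwrite) returns a stream into the new HDFS file
            try (FSDataOutputStream out = fs.create(p, true)) {
                out.writeUTF("hello hdfs");
            }
            FileStatus st = fs.getFileStatus(p);
            System.out.println("replication = " + st.getReplication()); // expect 1
            System.out.println("length      = " + st.getLen());
        }
    }
}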
3. Web UIs
http://localhost:8088/cluster (YARN ResourceManager)
http://localhost:50070/dfshealth.html#tab-overview (HDFS NameNode)
Note: this post is not original; it is adapted from https://blog.csdn.net/antgan/article/details/52067441. Thanks to the original author.