Compiling and Installing Hadoop on Windows

A new Hadoop release is out, so I decided to try compiling and installing it myself. Here is my process:

Reference article: Build, Install, Configure and Run Apache Hadoop 2.2.0 in Microsoft Windows OS (please keep this link if you repost).

1. Prerequisites:

    a. Download hadoop-2.5.0-src.tar.gz

    b. A Windows system, obviously (I'm on Windows 8.1)

    c. Windows SDK or Visual Studio 2010 (per the official docs; I used VS2013)

    d. Maven 3.0 or later

    e. Protocol Buffers 2.5.0

    f. Cygwin

    g. JDK 1.6+

    h. A reliable network connection (mine was poor and the build failed several times)

    i. CMake 2.6 or later

2. Install everything above. None of it is hard; most tools install fine with the defaults. Maven and Protocol Buffers 2.5.0 only need to be extracted somewhere; use paths with no spaces that are not too long.

3. Add the system environment variables JAVA_HOME, M2_HOME and Platform (mind the case of Platform). Set each one to the corresponding root path. Platform should be x64 for a 64-bit machine or Win32 for a 32-bit machine.

4. Edit Path and append (an example of setting all of these from a command prompt follows):

;%JAVA_HOME%\bin;%M2_HOME%\bin;D:\protobuf (your Protocol Buffers path);D:\cygwin64\bin (your Cygwin path)
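
For reference, here is one way to do steps 3 and 4 from an elevated command prompt (setx /M writes to the system environment). The JDK and Maven paths below are only placeholders; point them at your own install locations:

setx /M JAVA_HOME "D:\java\jdk1.7.0_65"
setx /M M2_HOME "D:\apache-maven-3.2.2"
setx /M Platform "x64"

Then open a new prompt and sanity-check that the tools are reachable:

java -version
mvn -version
protoc --version
cmake --version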
5. Extract the downloaded hadoop-2.5.0-src.tar.gz to some location (same path requirements as above), e.g. c:\hdfs

6. Open the Windows SDK 7.1 Command Prompt. Since I'm on VS2013, that is the Developer Command Prompt for VS2013.
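
If the Platform variable from step 3 is not visible inside this prompt, you can set it just for the session and confirm the MSVC compiler is on PATH (assuming a 64-bit build):

set Platform=x64
cl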

7. Change into the Hadoop source directory and run the command below (an unstable network may cause failures; just retry... I'm not entirely sure why).

cd c:\hdfs
mvn package -Pdist,native-win -DskipTests -Dtar
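
If the build dies partway through (typically a dependency download hiccup), Maven can resume from the failing module with -rf instead of starting over. The module name here is only an example; use whichever module the error message names:

mvn package -Pdist,native-win -DskipTests -Dtar -rf :hadoop-hdfs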

*******************

Common errors

    1)[ERROR] around Ant part ...sh 

        Your Cygwin entry in Path is wrong; double-check it.

    2) [ERROR] * @param output List<String> ... unknown symbol ...

        You're probably using JDK 1.8; change the command above to this one:

mvn package -Pdist,native-win -DskipTests -Dtar -Dadditionalparam=-Xdoclint:none

   3) The build of "\hdfs\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj" (default target) fails.

        The project officially targets VS2010. If you are on VS2012 or later, open the file in Visual Studio and let it upgrade (retarget) the project; that fixes it.

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [  1.283 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [  1.166 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [  2.847 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.291 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  2.096 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [  3.487 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [  2.064 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [  3.196 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [  4.110 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [01:47 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [  7.241 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.050 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [02:25 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 24.210 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [  4.344 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  3.295 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.042 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.041 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [ 36.408 s]
[INFO] hadoop-yarn-common ................................. SUCCESS [01:08 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.048 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 20.763 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [01:53 min]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  3.418 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [  6.301 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 36.235 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  1.510 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [  7.688 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.042 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  3.780 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  2.559 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.204 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [  4.803 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.110 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 42.894 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 18.299 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  4.013 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 16.918 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  9.512 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 18.007 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  1.972 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  7.123 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [  2.700 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [  8.957 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 19.241 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  2.471 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [  7.488 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  6.371 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  2.850 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  3.182 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  0.035 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  6.030 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [  6.629 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.105 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 13.247 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [  5.549 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.034 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 31.373 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:15 min
[INFO] Finished at: 2014-08-14T08:04:37+08:00
[INFO] Final Memory: 160M/505M
[INFO] ------------------------------------------------------------------------
D:\hdfs>

Seeing the output above means you are most of the way there.

8. Go to \hadoop-dist\target; if a hadoop-2.5.0.tar.gz appears there, the build worked.

9. Install Hadoop: extract hadoop-2.5.0.tar.gz to a directory (see the sketch below).
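
The original doesn't spell this out, but since the yarn.application.classpath setting below refers to %HADOOP_HOME%, it helps to define that variable now. A sketch, assuming the distribution was extracted to c:\hadoop (adjust to your own directory):

setx HADOOP_HOME "c:\hadoop"

If you want to run the hadoop commands from any directory, also append %HADOOP_HOME%\bin and %HADOOP_HOME%\sbin to Path the same way as in step 4.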

10. Edit \etc\hadoop\core-site.xml and add the following inside the <configuration> tag:

<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>

'fs.defaultFS:
The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.'

11. Edit \etc\hadoop\hdfs-site.xml and add the following inside the <configuration> tag:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/hadoop/data/dfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/hadoop/data/dfs/datanode</value>
</property>
' dfs.replication:
Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.
dfs.namenode.name.dir:
Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
dfs.datanode.data.dir:
Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored. '
12. Edit \etc\hadoop\yarn-site.xml and add the following inside the <configuration> tag:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>
%HADOOP_HOME%\etc\hadoop,
%HADOOP_HOME%\share\hadoop\common\*,
%HADOOP_HOME%\share\hadoop\common\lib\*,
%HADOOP_HOME%\share\hadoop\mapreduce\*,
%HADOOP_HOME%\share\hadoop\mapreduce\lib\*,
%HADOOP_HOME%\share\hadoop\hdfs\*,
%HADOOP_HOME%\share\hadoop\hdfs\lib\*,          
%HADOOP_HOME%\share\hadoop\yarn\*,
%HADOOP_HOME%\share\hadoop\yarn\lib\*
</value>
</property>
' yarn.nodemanager.aux-services:
The auxiliary service name. Default value is mapreduce_shuffle
yarn.nodemanager.aux-services.mapreduce.shuffle.class:
The auxiliary service class to use. Default value is org.apache.hadoop.mapred.ShuffleHandler
yarn.application.classpath:
CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries. '

13. Edit \etc\hadoop\mapred-site.xml and add the following inside the <configuration> tag:

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
' mapreduce.framework.name:
The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn. '
14. The first time around, you need to format the namenode.

Open cmd, go to the bin directory under your Hadoop installation, and run:

hdfs namenode -format
If you see:

14/08/14 08:27:49 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
then it worked.

15. Start HDFS

cd ..
cd sbin
start-dfs

16. Start MapReduce and YARN

start-yarn
starting yarn daemons

(or simply)

start-all
A total of four console windows should appear.

17. Open a browser and test it:

Node Manager: http://localhost:8042/

Namenode: http://localhost:50070
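
As a quick smoke test (not part of the original steps), you can also talk to HDFS from another cmd window in the bin directory; a minimal sketch:

hdfs dfs -mkdir /test
hdfs dfs -ls /

If the new directory shows up in the listing, the namenode and datanode are answering.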
