- CentOS 7
- hadoop-2.4.0-src.tar.gz
- jdk-7u71-linux-x64.tar.gz
- scala-2.10.4.tgz
- spark-1.2.0-bin-hadoop2.4.tgz
For the Spark development environment itself, this article uses IntelliJ IDEA on Windows 7 as the IDE; the cluster side runs on CentOS 7 with the packages listed above. The following installation steps are performed on the CentOS machine:
1. Install the JDK
Extract the JDK archive into /usr/lib:
- sudo cp jdk-7u71-linux-x64.tar.gz /usr/lib
- cd /usr/lib
- sudo tar -xvzf jdk-7u71-linux-x64.tar.gz
- sudo gedit /etc/profile
Append the following environment variables to the end of /etc/profile:
- export JAVA_HOME=/usr/lib/jdk1.7.0_71
- export JRE_HOME=/usr/lib/jdk1.7.0_71/jre
- export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
- export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
Save the file and reload /etc/profile:
- source /etc/profile
Check that the JDK was installed successfully:
- java -version
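The first line of output should report the installed version, for example:
- java version "1.7.0_71"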
2. Install and configure SSH
On CentOS 7, install and start the SSH server (on Debian/Ubuntu, use apt-get install openssh-server and /etc/init.d/ssh start instead):
- sudo yum install openssh-server
- sudo systemctl start sshd
Generate a key pair and add it to the authorized keys:
- ssh-keygen -t rsa -P ""
- cd /home/hduser/.ssh
- cat id_rsa.pub >> authorized_keys
Log in over ssh (once the key is in authorized_keys, this should no longer prompt for a password):
- ssh localhost
3. Install hadoop 2.4.0
hadoop 2.4.0 is installed in pseudo-distributed mode. On a 64-bit machine the native libraries have to be built from source, so we build Hadoop first and unpack the result into /usr/local afterwards.
1. Build prerequisites (Maven)
Maven is available from the official download page; it can be built from source, but the prebuilt binary is sufficient here:
- wget http://mirror.bit.edu.cn/apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.zip
After extracting it, configure the environment variables in /etc/profile, just as before:
- export MAVEN_HOME=/opt/maven3.1.1
- export PATH=$PATH:$MAVEN_HOME/bin
Verify the configuration with mvn -version; the output should resemble:
- Apache Maven 3.1.1 (0728685237757ffbf44136acec0402957f723d9a; 2015-03-11 23:22:22+0800)
- Maven home: /opt/maven3.1.1
- Java version: 1.7.0_71, vendor: Oracle Corporation
- Java home: /opt/jdk1.7/jre
- Default locale: en_US, platform encoding: UTF-8
- OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"
2. Build Hadoop
First download the Hadoop source from an official mirror:
- wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0-src.tar.gz
32-bit machines can simply use the official prebuilt packages; 64-bit machines must build Hadoop themselves. In the Maven installation directory, edit conf/settings.xml and add the following inside <mirrors></mirrors>, leaving the existing entries untouched:
- <mirror>
- <id>nexus-osc</id>
- <mirrorOf>*</mirrorOf>
- <name>Nexusosc</name>
- <url>http://maven.oschina.net/content/groups/public/</url>
- </mirror>
Likewise, add a new profile inside <profiles></profiles>:
- <profile>
- <id>jdk-1.7</id>
- <activation>
- <jdk>1.7</jdk>
- </activation>
- <repositories>
- <repository>
- <id>nexus</id>
- <name>local private nexus</name>
- <url>http://maven.oschina.net/content/groups/public/</url>
- <releases>
- <enabled>true</enabled>
- </releases>
- <snapshots>
- <enabled>false</enabled>
- </snapshots>
- </repository>
- </repositories>
- <pluginRepositories>
- <pluginRepository>
- <id>nexus</id>
- <name>local private nexus</name>
- <url>http://maven.oschina.net/content/groups/public/</url>
- <releases>
- <enabled>true</enabled>
- </releases>
- <snapshots>
- <enabled>false</enabled>
- </snapshots>
- </pluginRepository>
- </pluginRepositories>
- </profile>
Run a clean build:
- cd hadoop-2.4.0-src
- mvn clean install -DskipTests
This may fail with the following error:
- [ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not return a version -> [Help 1]
- [ERROR]
- [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
- [ERROR] Re-run Maven using the -X switch to enable full debug logging.
- [ERROR]
- [ERROR] For more information about the errors and possible solutions, please read the following articles:
- [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
- [ERROR]
- [ERROR] After correcting the problems, you can resume the build with the command
- [ERROR] mvn <goals> -rf :hadoop-common
Building hadoop 2.4.0 requires protoc 2.5.0, so protoc must be installed as well. Download version 2.5.0 from https://code.google.com/p/protobuf/downloads/list. Before compiling protoc, install a few dependencies: gcc, gcc-c++, and make (skip any that are already present):
- yum install gcc
- yum install gcc-c++
- yum install make
Build and install protoc:
- tar -xvf protobuf-2.5.0.tar.bz2
- cd protobuf-2.5.0
- ./configure --prefix=/opt/protoc/
- make && make install
After installing, add protoc to the PATH in /etc/profile, following the same pattern as above; a minimal sketch is shown below. The native build additionally depends on cmake, openssl-devel, and ncurses-devel, installed right after:
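A minimal sketch, assuming the /opt/protoc prefix chosen above:
- export PATH=$PATH:/opt/protoc/bin
- source /etc/profile
- protoc --version
The last command should print libprotoc 2.5.0; this is exactly the check that failed in the Maven error earlier.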
- yum install cmake
- yum install openssl-devel
- yum install ncurses-devel
Now the build itself can proceed:
- mvn package -Pdist,native -DskipTests -Dtar
This takes quite a while, depending partly on network speed; a successful run ends like this:
- [INFO] ------------------------------------------------------------------------
- [INFO] Reactor Summary:
- [INFO]
- [INFO] Apache Hadoop Main ................................ SUCCESS [3.709s]
- [INFO] Apache Hadoop Project POM ......................... SUCCESS [2.229s]
- [INFO] Apache Hadoop Annotations ......................... SUCCESS [5.270s]
- [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.388s]
- [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [3.485s]
- [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [8.655s]
- [INFO] Apache Hadoop Auth ................................ SUCCESS [7.782s]
- [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [5.731s]
- [INFO] Apache Hadoop Common .............................. SUCCESS [1:52.476s]
- [INFO] Apache Hadoop NFS ................................. SUCCESS [9.935s]
- [INFO] Apache Hadoop Common Project ...................... SUCCESS [0.110s]
- [INFO] Apache Hadoop HDFS ................................ SUCCESS [1:58.347s]
- [INFO] Apache Hadoop HttpFS .............................. SUCCESS [26.915s]
- [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [17.002s]
- [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [5.292s]
- [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.073s]
- [INFO] hadoop-yarn ....................................... SUCCESS [0.335s]
- [INFO] hadoop-yarn-api ................................... SUCCESS [54.478s]
- [INFO] hadoop-yarn-common ................................ SUCCESS [39.215s]
- [INFO] hadoop-yarn-server ................................ SUCCESS [0.241s]
- [INFO] hadoop-yarn-server-common ......................... SUCCESS [15.601s]
- [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [21.566s]
- [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [4.754s]
- [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [20.625s]
- [INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.755s]
- [INFO] hadoop-yarn-client ................................ SUCCESS [6.748s]
- [INFO] hadoop-yarn-applications .......................... SUCCESS [0.155s]
- [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [4.661s]
- [INFO] hadoop-mapreduce-client ........................... SUCCESS [0.160s]
- [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [36.090s]
- [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [2.753s]
- [INFO] hadoop-yarn-site .................................. SUCCESS [0.151s]
- [INFO] hadoop-yarn-project ............................... SUCCESS [4.771s]
- [INFO] hadoop-mapreduce-client-common .................... SUCCESS [24.870s]
- [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [3.812s]
- [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [15.759s]
- [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [6.831s]
- [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [8.126s]
- [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [2.320s]
- [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [9.596s]
- [INFO] hadoop-mapreduce .................................. SUCCESS [3.905s]
- [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [7.118s]
- [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [11.651s]
- [INFO] Apache Hadoop Archives ............................ SUCCESS [2.671s]
- [INFO] Apache Hadoop Rumen ............................... SUCCESS [10.038s]
- [INFO] Apache Hadoop Gridmix ............................. SUCCESS [6.062s]
- [INFO] Apache Hadoop Data Join ........................... SUCCESS [4.104s]
- [INFO] Apache Hadoop Extras .............................. SUCCESS [4.210s]
- [INFO] Apache Hadoop Pipes ............................... SUCCESS [9.419s]
- [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [2.306s]
- [INFO] Apache Hadoop Tools ............................... SUCCESS [0.037s]
- [INFO] Apache Hadoop Distribution ........................ SUCCESS [21.579s]
- [INFO] Apache Hadoop Client .............................. SUCCESS [7.299s]
- [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [7.347s]
- [INFO] ------------------------------------------------------------------------
- [INFO] BUILD SUCCESS
- [INFO] ------------------------------------------------------------------------
- [INFO] Total time: 11:53.144s
- [INFO] Finished at:
- [INFO] Final Memory: 70M/239M
- [INFO] ------------------------------------------------------------------------
Once you see output like the above, the build is complete. The built distribution is at hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0.
The native libraries confirm that this is a 64-bit build:
- [root@localhost hadoop-2.4.0]# file lib//native/*
- lib//native/libhadoop.a: current ar archive
- lib//native/libhadooppipes.a: current ar archive
- lib//native/libhadoop.so: symbolic link to `libhadoop.so.1.0.0'
- lib//native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
- lib//native/libhadooputils.a: current ar archive
- lib//native/libhdfs.a: current ar archive
- lib//native/libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
- lib//native/libhdfs.so.0.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
With Hadoop built successfully, deployment can begin.
Copy the built hadoop-2.4.0 directory into /usr/local.
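For example, using the build output path above:
- sudo cp -r hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0 /usr/local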
Append the following environment variables to the end of /etc/profile:
- export HADOOP_HOME=/usr/local/hadoop-2.4.0
- export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
- export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
- export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Save the file and reload /etc/profile:
- source /etc/profile
Set the JDK path in hadoop-env.sh and yarn-env.sh under /usr/local/hadoop-2.4.0/etc/hadoop:
- cd /usr/local/hadoop-2.4.0/etc/hadoop
- sudo gedit hadoop-env.sh
- sudo gedit yarn-env.sh
In hadoop-env.sh:
- export JAVA_HOME=/usr/lib/jdk1.7.0_71
In yarn-env.sh:
- export JAVA_HOME=/usr/lib/jdk1.7.0_71
Edit core-site.xml:
- sudo gedit core-site.xml
Add between <configuration> and </configuration> (fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS, though the old key still works):
- <property>
- <name>fs.default.name</name>
- <value>hdfs://localhost:9000</value>
- </property>
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/app/hadoop/tmp</value>
- </property>
Edit hdfs-site.xml:
- sudo gedit hdfs-site.xml
Add between <configuration> and </configuration> (note the DataNode property is dfs.datanode.data.dir, not dfs.namenode.data.dir):
- <property>
- <name>dfs.namenode.name.dir</name>
- <value>/app/hadoop/dfs/nn</value>
- </property>
- <property>
- <name>dfs.datanode.data.dir</name>
- <value>/app/hadoop/dfs/dn</value>
- </property>
- <property>
- <name>dfs.replication</name>
- <value>1</value>
- </property>
Edit yarn-site.xml:
- sudo gedit yarn-site.xml
Add between <configuration> and </configuration> (mapreduce.framework.name belongs in mapred-site.xml and is set there below):
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
Copy mapred-site.xml.template to mapred-site.xml and edit it:
- sudo cp mapred-site.xml.template mapred-site.xml
- sudo gedit mapred-site.xml
Add between <configuration> and </configuration> (the Hadoop 1.x key mapreduce.jobtracker.address is ignored under YARN; what matters here is selecting YARN as the MapReduce framework):
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
Remember to edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh as well, appending at the end:
- export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
Before starting hadoop, create the /app directory and give it the right owner, to avoid problems writing logs and data:
- sudo mkdir /app
- sudo chown -R hduser:hduser /app
Format the namenode:
- hdfs namenode -format
(hadoop namenode -format still works but is deprecated in Hadoop 2.x.)
Start hdfs and yarn. When only developing Spark, starting hdfs is enough:
- sbin/start-dfs.sh
- sbin/start-yarn.sh
Open http://localhost:50070/ in a browser to view the HDFS status page.
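You can also check from the shell that the daemons are up; jps (bundled with the JDK) should list at least NameNode, DataNode, and SecondaryNameNode, plus ResourceManager and NodeManager if YARN was started:
- jps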
4. Install scala
- sudo cp /home/hduser/Download/scala-2.10.4.tgz /usr/local
- sudo tar -xvzf scala-2.10.4.tgz
Append the environment variables to the end of /etc/profile:
- export SCALA_HOME=/usr/local/scala-2.10.4
- export PATH=$SCALA_HOME/bin:$PATH
Save the file and reload /etc/profile:
- source /etc/profile
Check that scala was installed successfully:
- scala -version
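If everything went well, this prints something like:
- Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL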
5. Install Spark
- sudo cp spark-1.2.0-bin-hadoop2.4.tgz /usr/local
- sudo tar -xvzf spark-1.2.0-bin-hadoop2.4.tgz
Append the environment variables to the end of /etc/profile:
- export SPARK_HOME=/usr/local/spark-1.2.0-bin-hadoop2.4
- export PATH=$SPARK_HOME/bin:$PATH
Save the file and reload /etc/profile:
- source /etc/profile
In $SPARK_HOME/conf, copy spark-env.sh.template to spark-env.sh and edit it:
- sudo cp spark-env.sh.template spark-env.sh
- sudo gedit spark-env.sh
Add to spark-env.sh:
- export SCALA_HOME=/usr/local/scala-2.10.4
- export JAVA_HOME=/usr/lib/jdk1.7.0_71
- export SPARK_MASTER_IP=localhost
- export SPARK_WORKER_MEMORY=1000m
Start Spark:
- cd /usr/local/spark-1.2.0-bin-hadoop2.4
- sbin/start-all.sh
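If the master and worker started, jps should now also list Master and Worker processes, and the standalone master's web UI is reachable at http://localhost:8080 by default:
- jps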
Worker nodes are listed in spark/conf/slaves (note the file name is slaves); for this single-machine setup one entry suffices, as shown below.
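Contents of conf/slaves for a local setup:
- localhost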
Check that Spark works:
- cd /usr/local/spark-1.2.0-bin-hadoop2.4
- bin/run-example SparkPi
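Somewhere in the log output there should be a result line similar to:
- Pi is roughly 3.14158
The exact value varies between runs, since SparkPi estimates pi by random sampling.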