After setting up pseudo-distributed Hadoop following my earlier notes 《在ubuntu里面安装hadoop记录》 (Installing Hadoop on Ubuntu), the next step is to install Spark in pseudo-distributed mode.
First, download Scala; this time I used version 2.11.5. Download link:
http://www.scala-lang.org/download/2.11.5.html
Download scala-2.11.5.tgz.
After downloading, extract it and move it to /home/k/software/, then add that path to the environment variables. This time I appended the following two lines to /etc/profile:
export SCALA_HOME=/home/k/software/scala-2.11.5
export PATH=${SCALA_HOME}/bin:$PATH
After adding them, run source /etc/profile to make the changes take effect.
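A minimal sketch of the extract-and-verify steps above (assuming the tarball landed in ~/Downloads; adjust the path to match your download location):

tar -zxf ~/Downloads/scala-2.11.5.tgz -C /home/k/software/
source /etc/profile
scala -version    # should report Scala version 2.11.5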
Next, download Spark; this time I used version 2.0.2. Download page: http://spark.apache.org/downloads.html
After downloading, extract it so that it ends up at /home/k/software/spark-2.0.2-bin-hadoop2.7.
Add Spark's path to the environment variables as well; again I appended the following two lines to /etc/profile:
export SPARK_HOME=/home/k/software/spark-2.0.2-bin-hadoop2.7
export PATH=${SPARK_HOME}/bin:$PATH
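The Spark steps mirror the Scala ones; a hedged sketch (again assuming the tarball is in ~/Downloads):

tar -zxf ~/Downloads/spark-2.0.2-bin-hadoop2.7.tgz -C /home/k/software/
source /etc/profile
spark-submit --version    # should report Spark version 2.0.2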
Next, open /home/k/software/spark-2.0.2-bin-hadoop2.7/conf/spark-env.sh (if the file does not exist yet, see the note after these lines) and add the following:
export JAVA_HOME=/home/k/software/jdk1.8.0_111
export SCALA_HOME=/home/k/software/scala-2.11.5
export HADOOP_HOME=/home/k/software/hadoop-2.7.3
export HADOOP_CONF_DIR=/home/k/software/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=localhost
export SPARK_WORKER_MEMORY=512M
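Note that the stock Spark tarball ships only conf/spark-env.sh.template, not spark-env.sh itself; if the file is missing, create it from the template first:

cd /home/k/software/spark-2.0.2-bin-hadoop2.7
cp conf/spark-env.sh.template conf/spark-env.sh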
If jps shows that the NodeManager, DataNode, ResourceManager, NameNode, and SecondaryNameNode processes are all running (plus Jps itself), Hadoop is up, and you can start Spark with sbin/start-all.sh from the Spark directory (a quick start-and-verify sketch follows the list). If it starts successfully, jps should show the following eight processes:
NodeManager
Master
DataNode
ResourceManager
Jps
Worker
NameNode
SecondaryNameNode
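Putting the start-up step together (by default the Spark master web UI should then be reachable at http://localhost:8080):

cd /home/k/software/spark-2.0.2-bin-hadoop2.7
sbin/start-all.sh    # launches the Master and one Worker locally
jps                  # should now list the eight processes above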