The base configuration is the same as in the previous post.
For the setup tutorial, first see the Xiamen University Database Lab blog series:
Spark 2.0 Distributed Cluster Environment Setup
Two configuration files need attention: conf/slaves and conf/spark-env.sh.
cd /usr/local/spark/
cp ./conf/slaves.template ./conf/slaves
# The slaves file lists the Worker nodes. Edit slaves and replace the default localhost with:
slave1
slave2
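Before starting the cluster, it is worth verifying that both Worker hostnames resolve and are reachable over SSH. A minimal check, assuming passwordless SSH to slave1 and slave2 is already set up as in the referenced tutorial:
for host in slave1 slave2; do
    ssh "$host" hostname    # should print the Worker's own hostname
done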
Configure the spark-env.sh file
Copy spark-env.sh.template to spark-env.sh:
cp ./conf/spark-env.sh.template ./conf/spark-env.sh
Edit spark-env.sh and add the following:
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=192.168.137.129
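Here SPARK_DIST_CLASSPATH lets Spark find the Hadoop jars, HADOOP_CONF_DIR points Spark at the Hadoop configuration, and SPARK_MASTER_IP is the address the Master binds to (Spark 2.x deprecates this variable in favor of SPARK_MASTER_HOST, but it still works). A quick sanity check that the command substitution will succeed, assuming Hadoop is installed under /usr/local/hadoop as above:
/usr/local/hadoop/bin/hadoop classpath    # should print the Hadoop jar paths picked up by SPARK_DIST_CLASSPATH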
Once the configuration is done, start the Master:
lockey@master:/usr/local$ ./spark/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-lockey-org.apache.spark.deploy.master.Master-1-master.out
lockey@ubuntu-lockey:/usr/local/spark$ jps
16371 Master
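With the Master running, the Workers can be started from the master node as well. A short sketch of the next step, assuming Spark is installed at the same path on slave1 and slave2 and passwordless SSH is in place:
cd /usr/local/spark
./sbin/start-slaves.sh    # starts a Worker on every host listed in conf/slaves
Running jps on slave1 or slave2 should then show a Worker process, and the cluster state can be checked in the Master web UI at http://192.168.137.129:8080 (the SPARK_MASTER_IP set above).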