Deploying Spark 1.6.0 in standalone mode

  1. Download Spark 1.6.0
  2. Extract it: tar -xvf spark-1.6.0-bin-hadoop2.6.tgz
  3. Copy it to /usr/local: sudo cp -r spark-1.6.0-bin-hadoop2.6 /usr/local/spark
  4. Change ownership to the hadoop user: sudo chown -R hadoop:users /usr/local/spark/
  5. Configure spark-env.sh
export JAVA_HOME=/usr/java/
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=4g
export SPARK_EXECUTOR_MEMORY=2g
export SPARK_DRIVER_MEMORY=2g
export SPARK_WORKER_CORES=4
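Note that SPARK_EXECUTOR_MEMORY should not exceed SPARK_WORKER_MEMORY, since executors are launched inside each worker's allocation. Before starting the cluster it is also worth confirming that the paths spark-env.sh references actually exist on each node. A minimal check, assuming the paths used above:

```shell
# Sanity-check the directories referenced in spark-env.sh on this node.
# Paths are the ones assumed in the config above; adjust to your layout.
for d in /usr/java /usr/local/hadoop /usr/local/spark; do
  if [ -d "$d" ]; then
    echo "ok: $d"
  else
    echo "MISSING: $d"
  fi
done
```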
  6. Configure slaves
slave1
slave2
  7. Configure spark-defaults.conf
spark.eventLog.enabled             true
spark.eventLog.dir                 hdfs://master:9000/historyserverforSpark
spark.yarn.historyServer.address   master:18080
spark.history.fs.logDirectory      hdfs://master:9000/historyserverforSpark
spark.executor.extraJavaOptions    -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
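Here spark.eventLog.dir (where running applications write their event logs) and spark.history.fs.logDirectory (where the history server reads them) must point at the same location, or finished applications will not appear in the history UI. A small consistency check over the file, assuming the conf path used in this guide:

```shell
# Verify that the writer and reader sides of event logging agree.
# Conf path assumed to be /usr/local/spark/conf/spark-defaults.conf.
awk '$1 == "spark.eventLog.dir"            { w = $2 }
     $1 == "spark.history.fs.logDirectory" { r = $2 }
     END { if (w == r) print "dirs match: " w
           else        print "MISMATCH: " w " vs " r }' \
    /usr/local/spark/conf/spark-defaults.conf
```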

  8. Configure environment variables in ~/.bashrc
SPARK_HOME=/usr/local/spark

PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin

export JAVA_HOME CLASSPATH SPARK_HOME
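After editing ~/.bashrc, run source ~/.bashrc (or log in again) so the variables take effect in the current shell. What the PATH line does can be sketched as follows; the check at the end confirms both Spark entries landed:

```shell
# Sketch of the ~/.bashrc effect: extend PATH with Spark's bin and sbin
# (SPARK_HOME as assumed above), then list the entries that mention spark.
SPARK_HOME=/usr/local/spark
PATH="$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH"
export SPARK_HOME PATH
echo "$PATH" | tr ':' '\n' | grep spark
```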
  9. Sync the spark directory and .bashrc to the slave nodes
hadoop@master:~> scp -r /usr/local/spark/ root@slave1:/usr/local
hadoop@master:~> scp -r /usr/local/spark/ root@slave2:/usr/local
hadoop@master:~> scp ~/.bashrc hadoop@slave1:/home/hadoop
hadoop@master:~> scp ~/.bashrc hadoop@slave2:/home/hadoop
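With more workers, the same sync can be scripted over the node list (the list here is assumed to match conf/slaves; this prints the commands as a dry run, so drop the leading echo to actually copy):

```shell
# Generate the per-node sync commands for every slave in the cluster.
# Printed with 'echo' as a dry run; remove it to execute the copies.
for node in slave1 slave2; do
  echo scp -r /usr/local/spark/ root@$node:/usr/local
  echo scp ~/.bashrc hadoop@$node:/home/hadoop
done
```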
  10. Fix ownership of the spark directory on the slave nodes
hadoop@slave1:~> sudo chown -R hadoop:users /usr/local/spark/
hadoop@slave2:~> sudo chown -R hadoop:users /usr/local/spark/
  11. Start Spark by running sh start-all.sh from /usr/local/spark/sbin (Spark's start-all.sh, not Hadoop's script of the same name)
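Once start-all.sh returns, the standalone master should be listening on port 7077 (the spark:// RPC endpoint) and 8080 (the master web UI). A quick reachability probe from any node, using bash's /dev/tcp feature (hostname and ports as configured above):

```shell
# Probe the standalone master's ports after startup.
# Requires bash (/dev/tcp is a bash feature, not POSIX sh).
for p in 7077 8080; do
  if (echo > "/dev/tcp/master/$p") 2>/dev/null; then
    echo "port $p open"
  else
    echo "port $p closed"
  fi
done
```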
  12. Start the history server. Before the first start, the HDFS directory used for event logs must be created; once running, the history UI is served on port 18080.
hadoop@master:/usr/local/spark/sbin> hadoop fs -mkdir hdfs://master:9000/historyserverforSpark
20/07/11 11:34:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop@master:/usr/local/spark/sbin> sh start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.history.HistoryServer-1-master.out
hadoop@master:/usr/local/spark/sbin>
  13. Run the SparkPi example
hadoop@master:/usr/local/spark/bin> ./spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://master:7077 \
> ../lib/spark-examples-1.6.0-hadoop2.6.0.jar
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
...
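SparkPi estimates π by Monte Carlo sampling: it throws random points into the square [-1,1]² and takes 4 times the fraction that land inside the unit circle; the cluster job above simply distributes that loop across the workers. The same idea as a single-node awk sketch (100,000 samples), useful for checking what the driver's result line should roughly look like:

```shell
# Single-node version of the SparkPi computation: sample random points
# and take 4 * (fraction inside the unit circle) as the estimate of pi.
awk 'BEGIN {
  srand(1); n = 100000; c = 0
  for (i = 0; i < n; i++) {
    x = rand() * 2 - 1; y = rand() * 2 - 1
    if (x*x + y*y <= 1) c++
  }
  printf "Pi is roughly %f\n", 4 * c / n
}'
```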
  14. Shutdown commands
/usr/local/spark/sbin/stop-history-server.sh
/usr/local/spark/sbin/stop-all.sh