Script for submitting a Spark job to the cluster

Spark launch script:

#!/bin/bash
# Hadoop configuration directory
export HADOOP_CONF_DIR=/etc/hadoop/conf
# Path to the spark2-submit wrapper
sparksubmits="/opt/cloudera/parcels/SPARK2/bin/spark2-submit"
# Local path of the application jar
jars="/usr/java/checkpoint/SSE_ST2_ANALYSIS_SPARK.jar"

echo "begin running NewsStreamingClusterDriver"

# Submit the Spark application with spark-submit
su root -c "$sparksubmits --class cn.com.trs.topic.news.streaming.NewsStreamingClusterDriver \
--master yarn \
--driver-cores 1 \
--driver-memory 2g \
--deploy-mode cluster \
--executor-cores 1 \
--num-executors 16 \
--executor-memory 1g \
--name NewsStreamingClusterDriver \
$jars XZ 4 20 \
"
echo "finished!"
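The command above hardcodes its application arguments (`XZ 4 20`) inside the `su root -c "…"` string, which makes it easy to mangle the quoting when editing. As a minimal sketch (the `build_submit_cmd` helper is hypothetical, not part of the original script), the command line can be assembled as a string first and echoed for inspection before it is handed to `su`:

```shell
# Hypothetical dry-run helper: assemble the spark2-submit command line
# so it can be printed and reviewed before actually executing it.
sparksubmits="/opt/cloudera/parcels/SPARK2/bin/spark2-submit"
jars="/usr/java/checkpoint/SSE_ST2_ANALYSIS_SPARK.jar"

build_submit_cmd() {
  # $1 = region code, $2 and $3 = numeric application arguments (as in "XZ 4 20")
  printf '%s --class cn.com.trs.topic.news.streaming.NewsStreamingClusterDriver --master yarn --deploy-mode cluster %s %s %s %s' \
    "$sparksubmits" "$jars" "$1" "$2" "$3"
}

cmd=$(build_submit_cmd XZ 4 20)
echo "$cmd"
```

Echoing the command first (instead of running it) is a cheap way to verify that variable expansion inside the quoted string produced what you expect.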

The spark2-submit wrapper script referenced above:

#!/bin/bash
# Reference: http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in
SOURCE="${BASH_SOURCE[0]}"
BIN_DIR="$( dirname "$SOURCE" )"
while [ -h "$SOURCE" ]
do
  SOURCE="$(readlink "$SOURCE")"
  # If the link target is relative, resolve it against the link's directory
  [[ $SOURCE != /* ]] && SOURCE="$BIN_DIR/$SOURCE"
  BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
done
BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
CDH_LIB_DIR=$BIN_DIR/../../CDH/lib
LIB_DIR=$BIN_DIR/../lib
export HADOOP_HOME=$CDH_LIB_DIR/hadoop

# Autodetect JAVA_HOME if not defined
. "$CDH_LIB_DIR/bigtop-utils/bigtop-detect-javahome"

exec "$LIB_DIR/spark2/bin/spark-submit" "$@"
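The `while`/`readlink` loop at the top of the wrapper follows a chain of symlinks back to the script's physical directory, which is what lets `spark2-submit` find its sibling `../lib` and `../../CDH/lib` directories even when invoked through a link. The same pattern can be exercised in isolation; this sketch uses a temporary directory and a POSIX `case` test in place of the bash-only `[[` check:

```shell
# Self-contained demo of the symlink-resolution pattern used by the wrapper:
# follow links until SOURCE is a regular file, tracking its real directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/real"
printf 'echo hi\n' > "$tmp/real/script.sh"
ln -s "$tmp/real/script.sh" "$tmp/link.sh"

SOURCE="$tmp/link.sh"
BIN_DIR=$(dirname "$SOURCE")
while [ -h "$SOURCE" ]; do
  SOURCE=$(readlink "$SOURCE")
  # If the link target is relative, resolve it against the link's directory
  case $SOURCE in
    /*) ;;
    *) SOURCE="$BIN_DIR/$SOURCE" ;;
  esac
  BIN_DIR=$(cd -P "$(dirname "$SOURCE")" && pwd)
done
expected=$(cd -P "$tmp/real" && pwd)
echo "$BIN_DIR"    # same physical path as $tmp/real
```

`cd -P` resolves the physical path (no symlinked components), so `BIN_DIR` ends up pointing at the directory the script actually lives in, not the directory the link was called from.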

 
