A ramble through Hadoop's startup scripts

This post explains what Hadoop's startup scripts do and how they differ, covering hadoop-daemon.sh, hadoop-daemons.sh, yarn-daemon.sh, yarn-daemons.sh, start-dfs.sh, stop-dfs.sh, start-yarn.sh, stop-yarn.sh, start-all.sh and stop-all.sh. By walking through the source, it shows how these scripts call the daemon scripts and pull their parameters from the configuration files to start the services.

First, a word about the commands I have been using to start the services. Out of habit I just pasted them in whenever I was checking whether a configuration worked, and quite a few people have since asked what the difference between those scripts actually is. Hence this post, which goes through what each script does, how they differ, and how they relate to one another.

Every time we start something we run sbin/ plus a script name. In fact, all of the start and stop scripts live in that sbin directory, so let's list it:

[super-yong@bigdata-01 sbin]$ ll
total 92
-rwxr-xr-x 1 super-yong super-yong 2752 Aug 17  2016 distribute-exclude.sh
-rwxr-xr-x 1 super-yong super-yong 6452 Aug 17  2016 hadoop-daemon.sh
-rwxr-xr-x 1 super-yong super-yong 1360 Aug 17  2016 hadoop-daemons.sh
-rwxr-xr-x 1 super-yong super-yong 1427 Aug 17  2016 hdfs-config.sh
-rwxr-xr-x 1 super-yong super-yong 2291 Aug 17  2016 httpfs.sh
-rwxr-xr-x 1 super-yong super-yong 3128 Aug 17  2016 kms.sh
-rwxr-xr-x 1 super-yong super-yong 4080 Aug 17  2016 mr-jobhistory-daemon.sh
-rwxr-xr-x 1 super-yong super-yong 1648 Aug 17  2016 refresh-namenodes.sh
-rwxr-xr-x 1 super-yong super-yong 2145 Aug 17  2016 slaves.sh
-rwxr-xr-x 1 super-yong super-yong 1471 Aug 17  2016 start-all.sh
-rwxr-xr-x 1 super-yong super-yong 1128 Aug 17  2016 start-balancer.sh
-rwxr-xr-x 1 super-yong super-yong 3734 Aug 17  2016 start-dfs.sh
-rwxr-xr-x 1 super-yong super-yong 1357 Aug 17  2016 start-secure-dns.sh
-rwxr-xr-x 1 super-yong super-yong 1347 Aug 17  2016 start-yarn.sh
-rwxr-xr-x 1 super-yong super-yong 1462 Aug 17  2016 stop-all.sh
-rwxr-xr-x 1 super-yong super-yong 1179 Aug 17  2016 stop-balancer.sh
-rwxr-xr-x 1 super-yong super-yong 3206 Aug 17  2016 stop-dfs.sh
-rwxr-xr-x 1 super-yong super-yong 1340 Aug 17  2016 stop-secure-dns.sh
-rwxr-xr-x 1 super-yong super-yong 1340 Aug 17  2016 stop-yarn.sh
-rwxr-xr-x 1 super-yong super-yong 4295 Aug 17  2016 yarn-daemon.sh
-rwxr-xr-x 1 super-yong super-yong 1353 Aug 17  2016 yarn-daemons.sh
[super-yong@bigdata-01 sbin]$
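
Before opening any of them, here is roughly how they get invoked from the Hadoop installation directory. These are illustrative commands only; which daemons actually come up depends on your own configuration:

sbin/start-dfs.sh     # start the HDFS daemons (NameNode, DataNodes, SecondaryNameNode)
sbin/start-yarn.sh    # start the YARN daemons (ResourceManager, NodeManagers)
sbin/start-all.sh     # deprecated wrapper that simply runs the two scripts above
sbin/stop-all.sh      # matching wrapper that stops everything again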

Good, now let's look at them one by one.

The scripts we use are the following (a sketch of how they call one another follows the list):

hadoop-daemon.sh
hadoop-daemons.sh

yarn-daemon.sh
yarn-daemons.sh

start-dfs.sh
stop-dfs.sh

start-yarn.sh
stop-yarn.sh

start-all.sh
stop-all.sh
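
Before going script by script, it helps to keep the call chain in mind. Roughly, from reading the 2.x scripts (details below):

start-all.sh  -> start-dfs.sh and start-yarn.sh
start-dfs.sh  -> hadoop-daemons.sh -> slaves.sh -> hadoop-daemon.sh on every listed host
start-yarn.sh -> yarn-daemon.sh (resourcemanager, locally) and yarn-daemons.sh -> slaves.sh -> yarn-daemon.sh on every listed host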

OK, let's open each one and look at it separately:

hadoop-daemon.sh

# Runs a Hadoop command as a daemon.
#
# Environment Variables
#
#   HADOOP_CONF_DIR  Alternate conf dir. Default is ${HADOOP_PREFIX}/conf.
#   HADOOP_LOG_DIR   Where log files are stored.  PWD by default.
#   HADOOP_MASTER    host:path where hadoop code should be rsync'd from
#   HADOOP_PID_DIR   The pid files are stored. /tmp by default.
#   HADOOP_IDENT_STRING   A string representing this instance of hadoop. $USER by default
#   HADOOP_NICENESS The scheduling priority for daemons. Defaults to 0.
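#
#   (Not part of the original script: in practice these variables are usually
#   overridden in ${HADOOP_CONF_DIR}/hadoop-env.sh, for example
#       export HADOOP_LOG_DIR=/var/log/hadoop   # hypothetical path
#       export HADOOP_PID_DIR=/var/run/hadoop   # hypothetical path
#   so that logs and pid files do not end up in $PWD and /tmp.)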


usage="Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] [--script script] (start|stop) <hadoop-command> <args...>"
# You have to pass arguments to this script

# if no args specified, show usage
if [ $# -le 1 ]; then
  echo $usage
  exit 1
fi

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh

# get arguments

#default value
hadoopScript="$HADOOP_PREFIX"/bin/hadoop
if [ "--script" = "$1" ]
  then
    shift
    hadoopScript=$1
    shift
fi
startStop=$1
shift
command=$1
shift
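# Example (not part of the original script): for an invocation like
#   sbin/hadoop-daemon.sh start namenode
# the shifts above leave startStop=start and command=namenode. A --script
# argument would swap hadoopScript for a custom launcher, and --config /
# --hosts, if given, are consumed when hadoop-config.sh is sourced above.
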
# rotate the log files, keeping up to $num old copies
hadoop_rotate_log ()
{
    log=$1;
    num=5;
    if [ -n "$2" ]; then
        num=$2
    fi
    if [ -f "$log" ]; then # rotate logs
        while [ $num -gt 1 ]; do
            prev=`expr $num - 1`
            [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
            num=$prev
        done
        mv "$log" "$log.$num";
    fi
}
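
# Illustrative effect (not part of the original script): with the default
# num=5, rotating a log such as hadoop-<user>-namenode-<host>.out renames
#   .out.4 -> .out.5, ... , .out.1 -> .out.2, .out -> .out.1
# so at most five old copies are kept before a fresh file is written.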

if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
  . "${HADOOP_CONF_DIR}/hadoop-env.sh"
fi

# Determine if we're starting a secure datanode, and if so, redefine appropriate variables
if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
  export HADOOP_PID_DIR=$HADOOP_SECURE_DN_PID_DIR
  export HADOOP_LOG_DIR=$HADOOP_SECURE_DN_LOG_DIR
  export HADOOP_IDENT_STRING=$HADOOP_SECURE_DN_USER
  starting_secure_dn="true"
fi

#Determine if we're starting a privileged NFS, if so, redefine the appropriate variables
if [ "$command" == "nfs3" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_PRIVILEGED_NFS_USER" ]; th