Shell-Based One-Command Smart Startup (and Shutdown) of Hadoop Ecosystem Services

Preface

For many students of big data, the deeper the study goes, the more services end up installed on the server, and starting them all takes a good while every time. Worse, after a long stretch of not starting them, you may well forget the commands or the startup procedure. A shell script solves this problem neatly.


1. One-command startup with shell

Let's solve startup first and deal with shutdown afterwards. We begin with the commands and the services:

The startup commands and the corresponding service (process) names are:

  • hadoop
    • start-dfs.sh => NameNode DataNode SecondaryNameNode
    • start-yarn.sh => NodeManager ResourceManager
  • hive
    • nohup hive --service metastore>/dev/null 2>&1 & => RunJar
    • nohup hive --service hiveserver2>/dev/null 2>&1 & => RunJar
  • zeppelin
    • /opt/software/hadoop/zeppelin082/bin/zeppelin-daemon.sh start => ZeppelinServer
  • zookeeper
    • zkServer.sh start => QuorumPeerMain
  • hbase
    • start-hbase.sh => HMaster HRegionServer
  • spark
    • bash /opt/software/hadoop/spark244/sbin/start-all.sh => Master Worker
  • kafka
    • kafka-server-start.sh -daemon /opt/software/kafka211200/config/server.properties => Kafka
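
The names to the right of each "=>" are the JVM process names reported by jps; the scripts below verify startup and shutdown by grepping jps output for them. A quick manual check, for example:

jps -lm | grep NameNode    # non-empty output means the NameNode process is up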

TIPS: both hadoop and spark provide a start-all.sh command, so to make sure the spark one is launched, its full path has to be given at startup.
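
To see which start-all.sh the PATH would resolve to, you can ask the shell directly:

type -a start-all.sh    # lists every start-all.sh found on the PATH, in lookup order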

       Code download: Baidu Cloud link
       Extraction code: epp7

1.1 Key technique

  • Associative arrays: an associative array's subscript can be non-integer, much like Java's key-value maps. The key is the subscript and the value is the element it maps to; keys are unique, values need not be. An associative array lets us look up a given service directly, making values convenient to retrieve.

e.g.

declare -A country
country[Shanghai]="China"
country[Tokyo]="Japan"
country[Chicago]="America"
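
Values are read back with ${array[key]}, and ${!array[@]} expands to all keys. Below is a minimal sketch of both, plus the ${var//_/ } pattern substitution the scripts rely on to restore spaces stored as "_" in array keys:

echo "${country[Tokyo]}"               # prints: Japan
for city in "${!country[@]}"; do       # iterate over all keys
    echo "$city -> ${country[$city]}"
done

# the scripts store commands with "_" in place of spaces,
# then turn them back into spaces before executing:
cmd="zkServer.sh_start"
echo "${cmd//_/ }"                     # prints: zkServer.sh start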

1.2 The code

#!/bin/bash
# associative array mapping each startup command to its service (process) names
declare -A server
server[start-dfs.sh]="NameNode_DataNode_SecondaryNameNode"
server[start-yarn.sh]="NodeManager_ResourceManager"
# the hive keys contain ">" and "&", so they must be quoted to parse as assignments
server["nohup_hive_--service_metastore>/dev/null_2>&1_&"]="RunJar"
server["nohup_hive_--service_hiveserver2>/dev/null_2>&1_&"]="RunJar"
server[zkServer.sh_start]="QuorumPeerMain"
server[start-hbase.sh]="HMaster_HRegionServer"
server[bash_/opt/software/spark244/sbin/start-all.sh]="Master_Worker"
server[zeppelin-daemon.sh_start]="ZeppelinServer"
server[kafka-server-start.sh_-daemon_/opt/software/kafka211200/config/server.properties]="Kafka"

# associative array mapping startup items to their commands
# ("case" is safe as a variable name here: reserved words only matter in command position)
declare -A case
case[dfs]="start-dfs.sh"
case[yarn]="start-yarn.sh"
case[metastore]="nohup_hive_--service_metastore>/dev/null_2>&1_&"
case[hiveserver2]="nohup_hive_--service_hiveserver2>/dev/null_2>&1_&"
case[zookeeper]="zkServer.sh_start"
case[hbase]="start-hbase.sh"
case[spark]="bash_/opt/software/spark244/sbin/start-all.sh"
case[zeppelin]="zeppelin-daemon.sh_start"
case[kafka]="kafka-server-start.sh_-daemon_/opt/software/kafka211200/config/server.properties"

# startup sequences corresponding to each command-line argument
hadoop="dfs yarn"
kafka="zookeeper kafka"
spark="dfs yarn spark"
hive="dfs yarn metastore hiveserver2 zeppelin"
hbase="dfs yarn metastore hiveserver2 zeppelin zookeeper hbase"
all="dfs yarn metastore hiveserver2 zeppelin zookeeper hbase kafka spark"

function start(){
    sentence=$@
    # convert to a regular indexed array
    sentence=($sentence)
    for i in ${sentence[@]}
    do
        echo "start $i ..."
        # look up the startup command for this item
        tmp=${case[${i}]}
        # turn the "_" placeholders back into spaces
        new="${tmp//_/ }"
        # the hive (nohup ...) services are handled separately
        if [[ "$new" =~ ^nohup ]]
        then
            output=`eval "$new"`
            sleep 4s
        else
            # zookeeper is handled separately too (its log output is hidden)
            if [[ "$new" =~ ^zkServer ]]
            then
                    output=`eval "$new>zklog.log 2>&1"`
                    sleep 2s
            else
                    output=`$new`
            fi
        fi
        # look up the service names this command should have started
        rst=${server[${tmp}]}
        # turn "_" back into spaces
        rst=${rst//_/ }
        rst=($rst)
        length=${#rst[@]}
        count=0
        # count running services to check whether everything started
        for j in ${rst[@]}
        do
            temp=`jps -lm|grep $j`
            if [ "$temp" ]
            then
                ((count++))
            fi
        done
        if [ $count -eq $length ]
        then
            echo "start $i success"
        else
            echo "Exception: start $i failed"
            exit 1
        fi
        echo
    done
}

# map the command-line argument to the services to start
if [ "$1" ]
then
    if [ "$1" = "hadoop" ]
    then
        start $hadoop
    fi

    if [ "$1" = "hive" ]
    then
        start $hive
    fi

    if [ "$1" = "hbase" ]
    then
        start $hbase
    fi

    if [ "$1" = "spark" ]
    then
        start $spark
    fi

    if [ "$1" = "kafka" ]
    then
        start $kafka
    fi

    if [ "$1" = "all" ]
    then
        start $all
    fi

    if [ "$1" != "hadoop" -a "$1" != "hive" -a "$1" != "kafka"  -a "$1" != "hbase" -a "$1" != "spark" -a "$1" != "all" ]
    then
        echo "Exception:unknown argument,please check again"
    fi
else
    echo "Exception:argument is null,just hadoop|spark|kafka|all supported"
fi
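
Assuming the start script is saved as starthadoop.sh (the file name is my choice), usage looks like:

chmod +x starthadoop.sh
./starthadoop.sh hadoop    # dfs + yarn only
./starthadoop.sh hive      # dfs yarn metastore hiveserver2 zeppelin
./starthadoop.sh all       # the full stack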



2. One-command shutdown with shell


       Code download: Baidu Cloud link
       Extraction code: vuvp

2.1 Notes on the code

  • Shutting down hive is unlike the other services: its processes have to be killed by hand with 'kill -9 PID';
    • find the PIDs of the hive services (their process name is RunJar) and pass them to 'kill -9' (a one-liner sketch of this step follows the tip below);
  • the zookeeper service prints log output, so its output is redirected to the target file zklog.log;
  • to decide whether a service stopped, the script checks after shutdown whether that service still appears among the running processes; if it does, the shutdown is reported as failed, but the script does not exit and continues with the remaining services.

Tips: stopping HBase may hang for a long time; consider shutting HBase down the same way the hive services are shut down.
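
For reference, the RunJar kill loop in the script below can also be written as one pipeline; this is only a sketch of the same idea:

# list java processes, keep the RunJar lines, take the PID column, kill them
# (-r is GNU xargs: skip running kill when no PID was found)
jps | grep RunJar | awk '{print $1}' | xargs -r kill -9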


2.2 The code

#!/bin/bash
# hbase takes very long to stop (consider stopping it the way hive is stopped)
declare -A case
case[spark]="/opt/software/spark244/sbin/stop-all.sh"
case[hbase]="stop-hbase.sh"
case[zookeeper]="zkServer.sh_stop"
case[zeppelin]="zeppelin-daemon.sh_stop"
case[hive]="kill_-9"
case[yarn]="stop-yarn.sh"
case[dfs]="stop-dfs.sh"
case[kafka]="kafka-server-stop.sh"

declare -A server
server[dfs]="NameNode_DataNode_SecondaryNameNode"
server[yarn]="NodeManager_ResourceManager"
server[hive]="RunJar"
server[zookeeper]="QuorumPeerMain"
server[zeppelin]="ZeppelinServer"
server[hbase]="HMaster_HRegionServer"
server[spark]="Master_Worker"
server[kafka]="Kafka"

hadoop="dfs yarn"
hive="dfs yarn hive zeppelin"
kafka="kafka zookeeper"
hbase="dfs yarn hive zeppelin hbase"
spark="dfs yarn spark"
all="dfs yarn hive zeppelin kafka hbase zookeeper spark"

function stop(){
    sentence=$@
    sentence=($sentence)
    for i in ${sentence[@]}
    do
        echo "start to shutdown $i..."
        tmp=$i
        new=${case[${i}]}
        new=${new//_/ }
        if [[ "$new" =~ ^kill ]]
        then
            number=`eval "jps|grep RunJar"`
            number=($number)
            for k in ${number[@]}
            do
                if [[ $k =~ ^[0-9] ]]
                then
                    rst=`eval "${new} ${k}"`
                fi
            done
        else
            if [[ "$new" =~ ^zkServer ]]
            then
                output=`eval "$new>zklog.log 2>&1"`
            else
                rst=`$new`
                echo $rst
            fi
        fi

        surplus=${server[${i}]}
        surplus=${surplus//_/ }
        surplus=($surplus)
        count=0
        for j in ${surplus[@]}
        do
            temp=`jps -lm|grep $j`
            if [ "$temp" ]
            then
                ((count++))
            fi
        done
        if [ $count -ne 0 ]
        then
            echo "Exception: shutdown $i failed"
        else
            echo "shutdown $i success"
        fi
        echo
    done
}


if [ "$1" ]
    then
        if [ "$1" = "hadoop" ]
        then
            stop $hadoop
        fi

        if [ "$1" = "hive" ]
        then
            stop $hive
        fi

        if [ "$1" = "hbase" ]
        then
            stop $hbase
        fi

        if [ "$1" = "spark" ]
        then
            stop $spark
        fi

        if [ "$1" = "kafka" ]
        then
            stop $kafka
        fi

        if [ "$1" = "all" ]
        then
            stop $all
        fi

        if [ "$1" != "hadoop" -a "$1" != "hive" -a "$1" != "spark" -a "$1" != "kafka"  -a "$1" != "all" ]
        then
            echo "Exception:unknown argument,please check again"
        fi
else
    echo "Exception:argument is null,just hadoop|hive|spark|kafka|all supported"
fi
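
Assuming the stop script is saved as stophadoop.sh (again, the name is my choice), it is invoked the same way:

chmod +x stophadoop.sh
./stophadoop.sh kafka    # stops kafka, then zookeeper
./stophadoop.sh all      # tears down the whole stack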


Result screenshot: you can see that stopping hbase and spark both reported failure, yet inspecting the processes afterwards shows they were all gone. The cause is that these two services shut down with a delay, so at verification time their processes could still be found and the script printed a failure, even though the services really had been stopped!

Tips: adding a short sleep after stopping the HBase and spark services avoids the case where a service has been shut down but the script reports that it has not; a more robust polling alternative is sketched below.
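
One option beyond a fixed sleep is to poll jps until the process disappears or a timeout expires; a minimal sketch (the 30-second limit is an arbitrary choice):

function wait_for_stop(){
    # $1 = process name to wait for, e.g. HMaster
    for ((t=0; t<30; t++))
    do
        if [ -z "$(jps -lm | grep $1)" ]
        then
            return 0    # process is gone
        fi
        sleep 1s
    done
    return 1            # still running after 30 seconds
}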



PS: If anything here is wrong or poorly written, please leave your valuable opinions or suggestions in the comments. And if this post helped you, I'd appreciate a quick like! Many thanks!

Author: wsjslient

Author's homepage: https://blog.csdn.net/wsjslient

