2 Script analysis
2.1 Analyzing start-all.sh
1) First, change into the /soft/hadoop/sbin directory.
2) nano start-all.sh
sbin/start-all.sh mainly does three things:
1) sources libexec/hadoop-config.sh
2) runs start-dfs.sh
3) runs start-yarn.sh
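The delegation above can be traced with stub functions (a sketch for illustration only; the real sub-scripts start the HDFS and YARN daemons):

```shell
# Stub out the two sub-scripts to show the order start-all.sh invokes them in.
start_dfs()  { echo "start-dfs.sh";  }
start_yarn() { echo "start-yarn.sh"; }

# After sourcing hadoop-config.sh, start-all.sh simply chains these two:
start_dfs
start_yarn
```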
3) cat libexec/hadoop-config.sh: this sets HADOOP_CONF_DIR to etc/hadoop (relative to the Hadoop install directory).
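hadoop-config.sh resolves the configuration directory with the standard shell default-value idiom; a minimal sketch (the /soft/hadoop install root is the one used in this guide):

```shell
# hadoop-config.sh keeps an already-exported HADOOP_CONF_DIR and otherwise
# falls back to etc/hadoop under the install root.
unset HADOOP_CONF_DIR                       # clean slate for the demo
HADOOP_PREFIX=/soft/hadoop                  # install root used in this guide
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_PREFIX/etc/hadoop}"
echo "$HADOOP_CONF_DIR"                     # -> /soft/hadoop/etc/hadoop
```

Because of the `${VAR:-default}` form, exporting HADOOP_CONF_DIR before running any start script overrides the default for the whole script chain.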
4) cat sbin/start-dfs.sh
What sbin/start-dfs.sh does:
1) sources libexec/hadoop-config.sh
2) sbin/hadoop-daemons.sh --config .. --hostname .. start namenode ...
3) sbin/hadoop-daemons.sh --config .. --hostname .. start datanode ...
4) sbin/hadoop-daemons.sh --config .. --hostname .. start secondarynamenode ...
5) sbin/hadoop-daemons.sh --config .. --hostname .. start zkfc ... (only when HDFS automatic HA failover is enabled)
5) cat sbin/start-yarn.sh
What sbin/start-yarn.sh does:
1) sources libexec/yarn-config.sh
2) sbin/yarn-daemon.sh start resourcemanager (on the local node)
3) sbin/yarn-daemons.sh start nodemanager (on every slave node)
6) cat sbin/hadoop-daemons.sh
What sbin/hadoop-daemons.sh does:
1) sources libexec/hadoop-config.sh
2) reads the host list from the slaves file
3) runs hadoop-daemon.sh on each of those hosts over SSH
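The fan-out step above can be sketched as a loop over a slaves-style file (a sketch: the hostnames are made up, and echo stands in for the real ssh call):

```shell
# Build a sample slaves file (one hostname per line, # starts a comment).
cat > /tmp/slaves.demo <<'EOF'
# cluster workers
slave1
slave2
EOF

# hadoop-daemons.sh effectively loops over these hosts and invokes
# hadoop-daemon.sh on each via ssh; echo stands in for ssh here.
while read -r host; do
  case "$host" in ''|\#*) continue ;; esac   # skip blanks and comments
  echo "ssh $host hadoop-daemon.sh start datanode"
done < /tmp/slaves.demo
```

This is why passwordless SSH from the master to every slave is a prerequisite for the cluster start scripts.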
7) cat sbin/hadoop-daemon.sh
What sbin/hadoop-daemon.sh does:
1) sources libexec/hadoop-config.sh
2) launches the requested daemon on the local node via bin/hdfs ...
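The single-node launcher follows the classic nohup-plus-pid-file pattern; a minimal sketch, with sleep standing in for the real bin/hdfs command and the /tmp paths chosen just for the demo:

```shell
# hadoop-daemon.sh backgrounds the JVM with nohup, redirects output to a
# log file, and records the pid so that "stop" can later signal it.
pidfile=/tmp/demo-daemon.pid
logfile=/tmp/demo-daemon.log

nohup sleep 30 >"$logfile" 2>&1 </dev/null &   # stand-in for: bin/hdfs namenode
echo $! > "$pidfile"

# "stop" is the mirror image: read the pid file and kill the process.
kill "$(cat "$pidfile")"
echo "stopped pid $(cat "$pidfile")"
```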
8) cat sbin/yarn-daemon.sh
What sbin/yarn-daemon.sh does:
1) sources libexec/yarn-config.sh
2) launches the requested daemon on the local node via bin/yarn
3. Starting and stopping Hadoop services individually
1) Start the name node:
hadoop-daemon.sh start namenode
2) Start the data nodes:
hadoop-daemons.sh start datanode slave
3) Start the secondary name node:
hadoop-daemon.sh start secondarynamenode
4) Check port 50070 (the NameNode web UI):
netstat -anop | grep 50070
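A loose pattern such as grep 500 also matches unrelated ports like 5000 or 50010; anchoring on the exact port is safer. A sketch against a sample line (the netstat line below is fabricated for illustration):

```shell
# A fabricated netstat -anop line of the kind the NameNode web UI produces.
sample='tcp  0  0 0.0.0.0:50070  0.0.0.0:*  LISTEN  12345/java'

# Match ":50070 " rather than "500" so 5000, 50010, ... don't slip through.
echo "$sample" | grep -q ':50070 ' && echo "50070 is listening"
```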
5) Start the resourcemanager:
yarn-daemon.sh start resourcemanager
6) Start the nodemanagers:
sbin/yarn-daemons.sh start nodemanager
4. Stopping a single data node
hadoop-daemon.sh stop datanode
Restart it:
hadoop-daemon.sh start datanode
I feel a bit embarrassed posting this in front of the experts; writing an article really isn't easy. Please bear with the rough spots, and I hope it is useful to you. Reference video: http://pan.baidu.com/s/1pLk7f6N