1. Hadoop
Run on the master machine:
(1) Start the HDFS cluster: start-dfs.sh
(2) Start the YARN cluster: start-yarn.sh
(3) Stop the HDFS cluster: stop-dfs.sh
(4) Stop the YARN cluster: stop-yarn.sh
(5) Stop HDFS and YARN with one command: stop-all.sh
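To confirm the daemons actually came up (or went down), the usual check is `jps` on each node: the master should show NameNode and ResourceManager, the workers DataNode and NodeManager. A minimal sketch, assuming the hosts hadoop01-hadoop03 used by the scripts later in these notes and passwordless SSH between them:

```shell
#!/bin/sh
# List the running Java daemons on every node (hostnames are assumptions).
# BatchMode/ConnectTimeout keep the loop from hanging on an unreachable host.
for host in hadoop01 hadoop02 hadoop03
do
  echo "---- $host ----"
  ssh -o BatchMode=yes -o ConnectTimeout=3 "$host" jps 2>/dev/null \
    || echo "(could not reach $host)"
done
```
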
2. Zookeeper
(1) Start Zookeeper (must be run on every machine):
zkServer.sh start
(2) Stop Zookeeper:
zkServer.sh stop
(3) Check Zookeeper status:
zkServer.sh status
(4) Script to start Zookeeper on all nodes with one command, named start_zk.sh
Run sh start_zk.sh in that directory to start Zookeeper across the cluster.
#!/bin/sh
# Start Zookeeper on every node over SSH
for host in hadoop01 hadoop02 hadoop03
do
  ssh root@$host "/export/servers/zookeeper-3.4.10/bin/zkServer.sh start"
done
(5) Script to stop Zookeeper on all nodes with one command
#!/bin/sh
# Stop Zookeeper on every node over SSH
for host in hadoop01 hadoop02 hadoop03
do
  ssh root@$host "/export/servers/zookeeper-3.4.10/bin/zkServer.sh stop"
done
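The same loop pattern also gives a one-command status check, which is handy for seeing which node was elected leader. A sketch under the same host and install-path assumptions as the start/stop scripts:

```shell
#!/bin/sh
# Print zkServer.sh status (Mode: leader/follower) for every node;
# hosts and path are assumptions matching the start/stop scripts.
for host in hadoop01 hadoop02 hadoop03
do
  echo "---- $host ----"
  ssh -o BatchMode=yes -o ConnectTimeout=3 root@$host \
    "/export/servers/zookeeper-3.4.10/bin/zkServer.sh status" 2>/dev/null \
    || echo "(could not reach $host)"
done
```
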
3. Kafka
(1) Start Kafka
— run in the Kafka install directory (on every machine; the Zookeeper ensemble must already be running):
bin/kafka-server-start.sh config/server.properties
(2) Script to start the Kafka service on all machines with one command
The script can be placed in the bin directory under the Kafka install directory and named kafka_start_all.sh.
To start the service, run kafka_start_all.sh in that directory.
#!/bin/sh
# Start the Kafka broker on every node in the background;
# "source /etc/profile" makes JAVA_HOME visible to the non-interactive SSH shell
for host in hadoop01 hadoop02 hadoop03
do
  ssh $host "source /etc/profile; nohup /export/servers/kafka_2.11-2.0.0/bin/kafka-server-start.sh /export/servers/kafka_2.11-2.0.0/config/server.properties >/dev/null 2>&1 &"
  echo "$host kafka started"
done
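The notes above only start Kafka; the distribution also ships bin/kafka-server-stop.sh, so a matching one-key stop script (named, say, kafka_stop_all.sh; hosts and install path are the same assumptions as above) could look like this. Stop the brokers before Zookeeper, since a broker needs Zookeeper to shut down cleanly:

```shell
#!/bin/sh
# Stop the Kafka broker on every node (hosts and install path are assumptions);
# the unreachable-host fallback keeps the loop going if one node is down.
for host in hadoop01 hadoop02 hadoop03
do
  ssh -o BatchMode=yes -o ConnectTimeout=3 $host \
    "source /etc/profile; /export/servers/kafka_2.11-2.0.0/bin/kafka-server-stop.sh" 2>/dev/null \
    || echo "(could not reach $host)"
  echo "$host kafka stop requested"
done
```
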
4. HBase
Run on the master machine:
(1) Start HBase:
start-hbase.sh
(2) Stop HBase:
stop-hbase.sh
(3) Enter the HBase shell:
hbase shell
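Inside the shell, `status` and `list` are quick sanity checks after startup. `hbase shell` can also run commands from a file, which is convenient for scripting; a small example (the file name hbase_check.txt is hypothetical):

```shell
# Write a few sanity-check commands to a file;
# run it on the master with: hbase shell hbase_check.txt
cat > hbase_check.txt <<'EOF'
status
list
exit
EOF
```
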