Cluster /etc/hosts entries:
192.168.94.100 master
192.168.94.101 slaver01
192.168.94.102 slaver02
Start all three virtual machines first!!
Everything below assumes Hadoop has already been started.
【hadoop】
View all Java processes on every node:
jpsall
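`jpsall` is presumably a custom helper script rather than a stock Hadoop command; a minimal sketch, assuming passwordless ssh between the nodes and the hostnames listed at the top of this file:

```shell
# Write a hypothetical jpsall helper to /tmp (script name and path are assumptions).
cat > /tmp/jpsall.sh <<'EOF'
#!/bin/bash
# Run jps on every cluster node over ssh and label each block of output.
for host in master slaver01 slaver02; do
    echo "===== $host ====="
    ssh "$host" jps
done
EOF
chmod +x /tmp/jpsall.sh
```

On master this should list NameNode and ResourceManager; the workers should show DataNode and NodeManager.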
Start Hadoop: run on the master node only!!
start-all.sh
Stop Hadoop: run on the master node only!!
stop-all.sh
NameNode web UI:
http://192.168.94.100:9870/dfshealth.html#tab-datanode
YARN web UI:
http://192.168.94.100:8088/cluster
Hadoop pi computation example (10 map tasks, 10 samples per map):
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 10 10
【mysql】Run on the master node only!!
Enable remote root access (run once inside the mysql client; MySQL 5.7 syntax):
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123@Hhhh' WITH GRANT OPTION;
Log in to the mysql client:
mysql -u root -p
Password: 123@Hhhh
Start MySQL:
systemctl start mysqld
Stop MySQL:
systemctl stop mysqld
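Note: the `GRANT ... IDENTIFIED BY` form above is MySQL 5.7 syntax; MySQL 8.0 removed it and requires creating the user first. A hedged equivalent for 8.0, using the same password as above:

```shell
mysql -u root -p <<'EOF'
-- MySQL 8.0+: create the remote user first, then grant privileges.
CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY '123@Hhhh';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF
```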
【hive】Run on the master node only, from any directory
Local mode:
Enter the hive CLI:
hive
Remote mode:
Start the metastore:
hive --service metastore &
Start hiveserver2:
hive --service hiveserver2 &
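The two backgrounded services above die when the launching shell exits; a sketch of a more robust start script using nohup (the script name and log paths are assumptions):

```shell
# Hypothetical helper script; /tmp log locations are assumptions.
cat > /tmp/start-hive.sh <<'EOF'
#!/bin/bash
# Detach metastore and hiveserver2 from the terminal; keep their logs in /tmp.
nohup hive --service metastore   > /tmp/metastore.log    2>&1 &
nohup hive --service hiveserver2 > /tmp/hiveserver2.log  2>&1 &
EOF
chmod +x /tmp/start-hive.sh
```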
HiveServer2 web UI:
http://192.168.94.100:10002/
Run on master:
beeline
!connect jdbc:hive2://localhost:10000
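Instead of typing `!connect` interactively, beeline can take the JDBC URL on the command line (the username is an assumption; match it to your HiveServer2 authentication settings):

```shell
beeline -u jdbc:hive2://localhost:10000 -n root
```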