- 1. vi /etc/profile and add the following environment variables:
export HADOOP_HOME=/opt/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_INSTALL=$HADOOP_HOME
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
source /etc/profile
Run hadoop version to check that the environment variables are configured correctly
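Note: the PATH line above references $JAVA_HOME and $JRE_HOME, which must already be defined earlier in /etc/profile. A minimal sketch, assuming the JDK was unpacked to /opt/jdk1.8.0 (the path is an assumption, adjust to your install):
export JAVA_HOME=/opt/jdk1.8.0   # assumed JDK location
export JRE_HOME=$JAVA_HOME/jre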
- 2. Go into the hadoop/etc/hadoop directory and edit the following files (minimal sample configurations follow this list)
(1) vi core-site.xml
(2) vi hdfs-site.xml
(3) vi hadoop-env.sh
Set JAVA_HOME=… to the actual JDK installation path
(4) mv mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
(5) vi yarn-site.xml
(6) vi slaves
Change its contents to the hostname(s) of the node(s)
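The following are minimal single-node (pseudo-distributed) sketches of these files, assuming Hadoop 2.x, a hostname of hadoop01, and /opt/hadoop as the install directory; every hostname, port, and path below is an assumption and should be replaced with your own values.

core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>  <!-- assumed hostname and port -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>       <!-- assumed temp dir, see step 6 -->
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>                     <!-- one replica on a single node -->
  </property>
</configuration>

hadoop-env.sh:
export JAVA_HOME=/opt/jdk1.8.0           # assumed JDK path

mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>                  <!-- run MapReduce on YARN -->
  </property>
</configuration>

yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>              <!-- assumed hostname -->
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

slaves:
hadoop01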
- 3. Set up passwordless SSH (mutual trust)
cd ~
ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 username@hostname
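A quick check (hostname below stands for the same hostname used above): ssh into the machine by name, which should now succeed without a password prompt. If it still prompts, the usual cause is permissions on the key files:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh hostname   # should log in without asking for a password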
- 4. After the configuration is complete, format the NameNode
hdfs namenode -format
or (older form, deprecated in Hadoop 2.x)
hadoop namenode -format
- 5. Start the services
start-dfs.sh
start-yarn.sh
or
start-all.sh
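Once the scripts have run, jps is a quick way to confirm the daemons are up; on a single node you would typically expect to see NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager (plus Jps itself):
jps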
- 6. If problems occur and you need to reformat, it is best to stop all services first
stop-all.sh
Then delete the old data and logs (tmp/ is the hadoop.tmp.dir directory, logs/ is Hadoop's log directory):
rm -rf tmp/
rm -rf logs/
After finding and fixing the problem, format again
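Put together, a full reset on a single node might look like this (paths assume HADOOP_HOME=/opt/hadoop and hadoop.tmp.dir=/opt/hadoop/tmp, both assumptions from the sample configs above):
stop-all.sh
rm -rf /opt/hadoop/tmp /opt/hadoop/logs   # assumed data and log directories
hdfs namenode -format
start-all.sh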