Environment variable configuration (e.g. in ~/.bashrc):
# Java 8
export JAVA_HOME=/opt/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH
# Hadoop
export HADOOP_HOME=/opt/hadoop-2.7.7
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
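A quick sanity check after exporting the variables; the paths below are the guide's install locations and should be adjusted to your system:

```shell
# Paths taken from the exports above; adjust if your install differs.
export JAVA_HOME=/opt/jdk1.8.0_121
export HADOOP_HOME=/opt/hadoop-2.7.7
export PATH=$JAVA_HOME/bin:$PATH
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# The Hadoop bin/sbin and JDK bin directories should now lead the search path:
echo "$PATH" | tr ':' '\n' | head -n 3
```

With a real install in place, `java -version` and `hadoop version` should then resolve to these directories.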
hadoop-env.sh configuration (set JAVA_HOME explicitly, since the Hadoop scripts do not always pick it up from the login environment):
export JAVA_HOME=/opt/jdk1.8.0_121/
core-site.xml configuration:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
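The same file can be generated from the shell with a heredoc; this sketch writes it to the current directory, and on a real node you would place it in $HADOOP_HOME/etc/hadoop/ (path assumed from the install above):

```shell
# Write core-site.xml locally; copy it into $HADOOP_HOME/etc/hadoop/ yourself.
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# Show the default-filesystem line we just wrote:
grep 'hdfs://' core-site.xml
```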
Configure passwordless SSH (the start scripts ssh into localhost):
ssh-keygen -t rsa
Press Enter at every prompt to accept the defaults
ssh-copy-id localhost
Follow the prompts to finish copying the key
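The interactive steps above can also be scripted. This demo generates the key non-interactively into a temporary directory so it cannot clobber an existing ~/.ssh/id_rsa; on a real node you would target ~/.ssh and then run ssh-copy-id:

```shell
# Demo only: generate an RSA keypair non-interactively (-N "" = empty passphrase).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"    # id_rsa  id_rsa.pub

# On the real host, after ssh-copy-id localhost, verify the trust works:
# ssh -o BatchMode=yes localhost true && echo "passwordless SSH works"
```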
hdfs-site.xml configuration:
<configuration>
  <!-- single-node setup, so one replica is enough -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>4096</value>
  </property>
</configuration>
yarn-site.xml configuration:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
mapred-site.xml configuration (in Hadoop 2.7 this file may need to be copied from mapred-site.xml.template first):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
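The three files above can likewise be generated in one pass. This sketch writes them to a temporary directory for inspection; on a real node they belong in $HADOOP_HOME/etc/hadoop/:

```shell
# Generate the remaining config files; copy them into $HADOOP_HOME/etc/hadoop/.
CONF=$(mktemp -d)

cat > "$CONF/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>4096</value>
  </property>
</configuration>
EOF

cat > "$CONF/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF

cat > "$CONF/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

ls "$CONF"    # hdfs-site.xml  mapred-site.xml  yarn-site.xml
```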
Format the filesystem (first run only; reformatting destroys existing HDFS metadata):
hdfs namenode -format
Start the HDFS (NameNode/DataNode) and YARN daemons:
start-dfs.sh
start-yarn.sh
Run jps to confirm that NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager are up.
Once everything is running, the web UIs are reachable at the Hadoop 2.x defaults: NameNode at http://localhost:50070 and ResourceManager at http://localhost:8088.
For HDFS shell operations, see: https://blog.csdn.net/hongxiao2016/article/details/88915053