Single-node Hadoop 3 installation
1. Install the JDK
tar -zxf jdk1.8.0_144.tar.gz
sudo ln -s /opt/package/jdk1.8.0_144/ /usr/java/latest
2. Set up passwordless SSH login
ssh-keygen -t rsa
ssh-copy-id vb-7
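The two commands above can be made safe to re-run; a sketch, assuming an empty passphrase (typical for Hadoop nodes) and `vb-7` as this host's name:

```shell
# Re-running ssh-keygen would overwrite the existing key pair, so guard it
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -q -f ~/.ssh/id_rsa
ssh-copy-id vb-7

# Verify: should print the hostname with no password prompt
ssh vb-7 hostname
```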
3. Install Hadoop
tar -zxf hadoop-3.1.0.tar.gz
rm -f bin/*.cmd   # Windows .cmd scripts, not needed on Linux (just tidying up)
rm -f sbin/*.cmd
rm -f etc/hadoop/*.cmd
Environment variables:
# jdk
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$JAVA_HOME/bin
# hadoop
export HADOOP_HOME=/opt/package/hadoop-3.1.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Also point Hadoop itself at the JDK in etc/hadoop/hadoop-env.sh:
vi etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/latest
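If the exports above go into ~/.bashrc (an assumption; any login profile works), reload it and sanity-check that both tools resolve from the new PATH:

```shell
source ~/.bashrc

# Both commands should succeed if the exports are correct
java -version      # expect 1.8.0_144
hadoop version     # expect 3.1.0
```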
4. Configuration files
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://vb-7:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/package/data/hadoop</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Format the NameNode:
hdfs namenode -format
Start HDFS:
start-dfs.sh
hdfs dfs -mkdir -p /user/dota/input
hdfs dfs -put etc/hadoop/*.xml input
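Note that the bare `input` path is relative to the user's HDFS home directory (/user/dota here), which is why the mkdir above matters. To confirm the upload:

```shell
# List the uploaded XML files; both paths refer to the same directory
hdfs dfs -ls input
hdfs dfs -ls /user/dota/input
```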
Start YARN:
start-yarn.sh
Run the examples
Following the official documentation's configuration, running either of these two examples throws an exception:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar grep input output 'dfs[a-z.]+'
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar wordcount /user/dota/input/* output
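Once a job does finish, the results can be read straight from HDFS; a sketch (part-r-00000 is the usual file name when there is a single reducer, and `local-output` is an illustrative local directory name):

```shell
# Print the job output; for wordcount this is "word<TAB>count" lines
hdfs dfs -cat output/part-r-00000 | head -n 20

# or fetch the whole output directory to local disk
hdfs dfs -get output local-output
```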
The exception: by default a container tries to use all the memory it can obtain (mapreduce.map.memory.mb defaults to -1). Cap the container memory by adding the following to mapred-site.xml:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx512M</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx512M</value>
</property>
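After saving the new limits, re-running a job requires removing the old output directory first, since MapReduce refuses to write to an existing output path; a sketch:

```shell
# Delete the stale output directory, then rerun with the 512 MB limits
hdfs dfs -rm -r -f output
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar wordcount /user/dota/input/* output
```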
Reference:
https://www.cnblogs.com/scw2901/p/4331682.html