Hadoop single-node installation
1. Configure environment variables
Name | Path | Notes |
---|---|---|
Java | /usr/java/jdk1.8.0_181 | |
Hadoop | /data01/apache/hadoop-2.9.2 | |
PATH | export JAVA_HOME=/usr/java/jdk1.8.0_181 export JRE_HOME=${JAVA_HOME}/jre export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin export PATH=$PATH:${JAVA_PATH} export HADOOP_HOME=/data01/apache/hadoop-2.9.2 export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH | Append to /etc/profile |
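Written out one export per line, as they would be appended to `/etc/profile` (the JDK and Hadoop paths are this guide's example install locations; substitute your own):

```shell
# Appended to /etc/profile; paths are this guide's example install locations.
export JAVA_HOME=/usr/java/jdk1.8.0_181
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:${CLASSPATH:-}
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
export HADOOP_HOME=/data01/apache/hadoop-2.9.2
# Put both bin (hdfs, hadoop) and sbin (start-*.sh scripts) on PATH.
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```

Run `source /etc/profile` afterwards so the current shell picks the variables up.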
```shell
[root@localhost hadoop-2.9.2]# java -version
openjdk version "1.8.0_282"
OpenJDK Runtime Environment (build 1.8.0_282-b08)
OpenJDK 64-Bit Server VM (build 25.282-b08, mixed mode)
[root@localhost hadoop-2.9.2]# hadoop version
Hadoop 2.9.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 826afbeae31ca687bc2f8471dc841b66ed2c6704
Compiled by ajisaka on 2018-11-13T12:42Z
Compiled with protoc 2.5.0
From source with checksum 3a9939967262218aa556c684d107985
This command was run using /data01/apache/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
[root@localhost hadoop-2.9.2]#
```
2. Edit the configuration files
- Change the hostname

  Check the current hostname:

  ```shell
  [root@localhost ~]# hostname
  localhost.localdomain
  [root@localhost ~]# uname -n
  localhost.localdomain
  ```

  Change it to hadoop1 (note: `/etc/hostname` holds only the bare hostname; the `127.0.0.1 hadoop1` mapping belongs in `/etc/hosts`):

  ```shell
  [root@localhost ~]# hostnamectl set-hostname hadoop1
  [root@localhost ~]# vim /etc/hostname          # contains only: hadoop1
  [root@hadoop1 ~]# vim /etc/hosts               # add the line: 127.0.0.1 hadoop1
  [root@hadoop1 hadoop-2.9.2]# vim /etc/sysconfig/network
  NETWORKING=yes
  HOSTNAME=hadoop1
  [root@localhost ~]# hostname
  hadoop1
  ```
- Edit core-site.xml

  ```xml
  <property>
    <name>fs.defaultFS</name>
    <!-- Use your own host/IP here; the port is the default -->
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- A Hadoop working directory of your choice -->
    <value>/data01/apache/hadoop-2.9.2/tmp</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>false</value>
    <description>Should native hadoop libraries, if present, be used.</description>
  </property>
  ```
- Edit hadoop-env.sh. This is required; without it, startup fails while bringing up the secondary namenode:

  ```
  Starting secondary namenodes [hadoop1]
  root@hadoop1's password:
  hadoop1: Permission denied, please try again.
  hadoop1: Error: JAVA_HOME is not set and could not be found.
  ```

  Set the JDK path explicitly in hadoop-env.sh:

  ```shell
  # The java implementation to use.
  export JAVA_HOME=/usr/java/jdk1.8.0_181
  ```
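The "Permission denied" half of that error is a separate SSH problem, not a JAVA_HOME one: the start scripts ssh into each host (here, the same machine) and prompt for a password each time. A one-time passwordless-SSH setup, sketched below for the user that starts Hadoop, removes the prompts:

```shell
# One-time passwordless-SSH-to-self setup (sketch).
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a key pair only if one does not already exist.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
# Authorize our own public key, avoiding duplicate entries.
grep -qxF "$(cat ~/.ssh/id_rsa.pub)" ~/.ssh/authorized_keys 2>/dev/null \
  || cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Afterwards `ssh hadoop1` (and `ssh localhost`) should log in without asking for a password.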
- Edit hdfs-site.xml

  ```xml
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <!-- Use your own host/IP here; the port is the default -->
    <value>hadoop1:50090</value>
  </property>
  ```
- Edit mapred-site.xml (if it does not exist yet, copy mapred-site.xml.template to mapred-site.xml first)

  ```xml
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  ```
- Edit yarn-site.xml

  ```xml
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <!-- Use your own host/IP here; the port is the default -->
    <value>hadoop1</value>
  </property>
  <!-- How reducers fetch data -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  ```
3. Initialize the environment (format the NameNode)
```shell
[root@hadoop1 hadoop-2.9.2]# hdfs namenode -format
21/04/18 10:22:15 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.9.2
```
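The format output is long; before starting any daemons it is worth confirming it actually succeeded. One way, sketched below (`-nonInteractive` is a standard `hdfs namenode -format` option that skips the reformat prompt; the log path is arbitrary):

```shell
# Format once, keep the log, and check for the success marker line.
hdfs namenode -format -nonInteractive 2>&1 | tee /tmp/nn-format.log
grep -q "successfully formatted" /tmp/nn-format.log \
  || echo "format FAILED - check /tmp/nn-format.log"
```

Reformatting an already-initialized NameNode wipes HDFS metadata, so only run this on first setup.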
4. Start Hadoop
The first attempt fails because start-all.sh is not on PATH until the /etc/profile changes from step 1 are applied and re-sourced:

```shell
[root@hadoop1 hadoop-2.9.2]# sh start-all.sh
sh: start-all.sh: No such file or directory
[root@hadoop1 hadoop-2.9.2]# vim /etc/profile
[root@hadoop1 hadoop-2.9.2]# source /etc/profile
[root@hadoop1 hadoop-2.9.2]# sh start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /data01/apache/hadoop-2.9.2/logs/hadoop-root-namenode-hadoop1.out
localhost: starting datanode, logging to /data01/apache/hadoop-2.9.2/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [hadoop1]
hadoop1: starting secondarynamenode, logging to /data01/apache/hadoop-2.9.2/logs/hadoop-root-secondarynamenode-hadoop1.out
starting yarn daemons
starting resourcemanager, logging to /data01/apache/hadoop-2.9.2/logs/yarn-root-resourcemanager-hadoop1.out
localhost: starting nodemanager, logging to /data01/apache/hadoop-2.9.2/logs/yarn-root-nodemanager-hadoop1.out
```
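If everything came up, `jps` (shipped with the JDK) should list the five Hadoop daemons besides itself. A quick check, sketched below:

```shell
# Verify the five expected daemons are running.
for p in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  jps | grep -qw "$p" && echo "$p: up" || echo "$p: MISSING"
done
```

For any daemon reported MISSING, look at its .out/.log file under $HADOOP_HOME/logs, as printed by start-all.sh above.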
5. Access the web UIs

With the daemons up, the default Hadoop 2.x web pages are:

- NameNode (HDFS overview): http://hadoop1:50070
- ResourceManager (YARN): http://hadoop1:8088