1. Download JDK 8 and configure the environment variables
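A minimal sketch of that configuration, assuming the JDK is unpacked to /srv/jdk8 (the path used in the Hadoop config below) and that /etc/profile is the chosen place for the system-wide variables:

export JAVA_HOME=/srv/jdk8
export PATH=$JAVA_HOME/bin:$PATH

Reload with source /etc/profile and verify with java -version.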
2. Download Hadoop 2.7.2 and extract it (tar -zxvf <archive name>)
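For example, assuming the downloaded archive is named hadoop-2.7.2.tar.gz and the installation should end up at /srv/hadoop (the path the rest of this guide assumes):

tar -zxvf hadoop-2.7.2.tar.gz -C /srv
mv /srv/hadoop-2.7.2 /srv/hadoop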
3. (1) Edit the configuration files hadoop-env.sh, yarn-env.sh and mapred-env.sh, adding the Java environment variable to each: export JAVA_HOME=/srv/jdk8
(2) Edit core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.1.102:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/srv/hadoop/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>
(3) Edit yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>192.168.1.102:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>192.168.1.102:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>192.168.1.102:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>192.168.1.102:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>192.168.1.102:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>768</value>
</property>
</configuration>
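Note: 768 MB for yarn.nodemanager.resource.memory-mb is below YARN's default minimum container allocation (yarn.scheduler.minimum-allocation-mb, 1024 MB), so with these values the ResourceManager may never be able to place a container on a node. If the machines have the memory to spare, giving the NodeManager at least 1024 MB avoids this, for example:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>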
(4) Edit mapred-site.xml
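In the stock Hadoop 2.7.2 tarball this file only exists as mapred-site.xml.template, so copy it first (path relative to the Hadoop installation directory):

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml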
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>192.168.1.102:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>192.168.1.102:19888</value>
</property>
</configuration>
(5) Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/srv/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/srv/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.1.102:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
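The local directories referenced above (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) can be created up front on every node so that ownership and permissions are correct, assuming the same layout as in the config:

mkdir -p /srv/hadoop/tmp /srv/hadoop/dfs/name /srv/hadoop/dfs/data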
4. Copy the Hadoop installation directory to the other servers
scp -r hadoop 192.168.1.103:/srv
scp -r hadoop 192.168.1.104:/srv
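For sbin/start-all.sh on the master to start the DataNode/NodeManager daemons on the other machines, etc/hadoop/slaves has to list them; a sketch assuming 192.168.1.103 and 192.168.1.104 are the worker nodes (edit the file before copying, or sync it to every node afterwards):

cat > /srv/hadoop/etc/hadoop/slaves <<EOF
192.168.1.103
192.168.1.104
EOF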
5. Format the NameNode: bin/hdfs namenode -format
6. Start Hadoop
sbin/start-all.sh
Passwordless SSH login must be configured between the servers first (start-all.sh logs in to the other nodes over SSH to start their daemons).
Configuration steps:
Run ssh-keygen -t rsa, which generates id_rsa.pub (public key) and id_rsa (private key) under ~/.ssh.
Append the generated public key to authorized_keys: cat id_rsa.pub >> authorized_keys
Then distribute the authorized_keys file to ~/.ssh/ on the other servers (its permissions should be 600).
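Once sbin/start-all.sh has run, a quick sanity check (IPs and ports as configured above; 50070 is the default NameNode web UI port in Hadoop 2.x):

jps                          # the master should show NameNode, SecondaryNameNode and ResourceManager
ssh 192.168.1.103 jps        # each worker should show DataNode and NodeManager
# Web UIs: http://192.168.1.102:50070 (HDFS) and http://192.168.1.102:8088 (YARN)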