Hadoop 2 Cluster Installation and Deployment
1. Upload the Hadoop tarball and extract it into /opt, for example as shown below
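A minimal sketch, assuming the uploaded tarball is named hadoop-2.7.7.tar.gz (a hypothetical version; use whichever release you actually uploaded):
cd /opt
tar -zxvf hadoop-2.7.7.tar.gz
mv hadoop-2.7.7 hadoop   # rename so the /opt/hadoop paths used below match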
2. Create the working directories:
mkdir -p /opt/hadoop/dfs/name
mkdir -p /opt/hadoop/dfs/data
mkdir -p /opt/hadoop/temp
3. cd into etc/hadoop under the Hadoop install directory; seven files need to be edited:
1.hadoop-env.sh
2.yarn-env.sh
3.mapred-site.xml
4.yarn-site.xml
5.hdfs-site.xml
6.core-site.xml
7.workers/slaves
1. Open and edit hadoop-env.sh:
vi hadoop-env.sh
Change JAVA_HOME in it:
export JAVA_HOME=[path to your Java installation]
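For example (the path is hypothetical; point it at your own JDK):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk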
2. Open and edit yarn-env.sh:
vi yarn-env.sh
Change JAVA_HOME in it the same way:
export JAVA_HOME=[path to your Java installation]
3. Open and edit mapred-site.xml (Hadoop 2 ships only mapred-site.xml.template, so copy it first if needed: cp mapred-site.xml.template mapred-site.xml):
vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
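The two jobhistory addresses above only take effect once the JobHistory server is running; after the cluster is up (step 7), start it on master with the script Hadoop 2 ships in sbin:
./sbin/mr-jobhistory-daemon.sh start historyserver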
4. Open and edit yarn-site.xml:
vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
5. Open and edit hdfs-site.xml:
vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <!-- this cluster has only two DataNodes (slave1, slave2), so use 2, not 3 -->
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
6. Open and edit core-site.xml:
vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- plain local path; hadoop.tmp.dir does not take a file: URI -->
    <value>/opt/hadoop/temp</value>
  </property>
  <!-- replace "hduser" in the two proxyuser keys with the user that runs Hadoop -->
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>
7. Open and edit the workers (or slaves) file:
vi workers
slave1
slave2
or (edit whichever of the two files your release ships and leave the other alone; Hadoop 2 uses slaves, Hadoop 3 renamed it to workers)
vi slaves
slave1
slave2
4. Distribute the hadoop directory to the other two nodes
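A minimal sketch using scp, assuming passwordless SSH from master to both slaves (set it up first with ssh-keygen and ssh-copy-id if it is missing; the start scripts in step 7 need it too):
scp -r /opt/hadoop slave1:/opt/
scp -r /opt/hadoop slave2:/opt/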
5. Format the NameNode
cd /opt/hadoop
./bin/hdfs namenode -format
6. Configure the environment variables yourself and source the profile
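For example, appended to /etc/profile (the variable names are the conventional ones; the paths assume the /opt/hadoop layout above):
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile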
7. Start the cluster (the start scripts live in sbin, so from /opt/hadoop):
./sbin/start-all.sh
or:
1. ./sbin/start-dfs.sh
2. ./sbin/start-yarn.sh
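A quick sanity check, assuming the hostnames used above: jps on master should list NameNode, SecondaryNameNode, and ResourceManager; on slave1 and slave2 it should list DataNode and NodeManager. Then:
hdfs dfsadmin -report   # both DataNodes should show up as live
The ResourceManager web UI is reachable at http://master:8088, per yarn.resourcemanager.webapp.address above.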