6) Configuration file: core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master77:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
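As a quick sanity check (not part of the original steps), you can confirm the file is well-formed XML and that fs.defaultFS points at the master before going further. This sketch writes a trimmed copy of the config to a temp file and reads the value back with python3, which is assumed to be installed; on a real cluster you would point CONF at /usr/local/hadoop/etc/hadoop/core-site.xml instead.

```shell
# Sketch: verify core-site.xml parses and extract fs.defaultFS.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master77:9000</value>
  </property>
</configuration>
EOF
FS_URI=$(python3 - "$CONF" <<'PY'
import sys, xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()   # raises if the XML is malformed
for p in root.findall('property'):
    if p.findtext('name') == 'fs.defaultFS':
        print(p.findtext('value'))
PY
)
echo "$FS_URI"   # prints hdfs://master77:9000
```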
7) Configuration file: hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master77:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
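One pitfall worth noting here (an addition, not from the original steps): the directories named in dfs.namenode.name.dir and dfs.datanode.data.dir should exist and be writable by the hadoop user before formatting. In this sketch a throwaway temp root stands in for /usr/local/hadoop so it can run anywhere; on the cluster you would create the real paths and chown them to hadoop.

```shell
# Sketch: pre-create the HDFS storage directories. DEMO_ROOT stands in for
# /usr/local/hadoop; on the real nodes you would also run:
#   sudo chown -R hadoop:hadoop /usr/local/hadoop/dfs
DEMO_ROOT=$(mktemp -d)
mkdir -p "$DEMO_ROOT/dfs/name" "$DEMO_ROOT/dfs/data"
ls "$DEMO_ROOT/dfs"
```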
8) Configuration file: mapred-site.xml
Create it from the template first, then edit it:
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
gedit etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master77:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master77:19888</value>
  </property>
</configuration>
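Note (an addition to the original text): start-all.sh does not start the JobHistory server configured above; in Hadoop 2.x it is launched separately from sbin/. The command is printed as a dry run here so the sketch runs anywhere; drop the `echo` on the real master.

```shell
# Dry-run sketch: launch the JobHistory server separately (Hadoop 2.x).
HADOOP_HOME=/usr/local/hadoop
HS_CMD="$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver"
echo "$HS_CMD"
```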
9) Configuration file: yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master77:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master77:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master77:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master77:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master77:8088</value>
  </property>
</configuration>
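Once the cluster is up, the two web UIs configured above (ResourceManager on 8088, JobHistory on 19888) give a quick health check. A dry-run sketch, assuming master77 resolves from your workstation; remove the `echo` to actually send the requests.

```shell
# Dry-run sketch: probe the ResourceManager and JobHistory web UIs.
UI_URLS="http://master77:8088 http://master77:19888"
for url in $UI_URLS; do
  echo "curl -sf -o /dev/null $url"
done
```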
10) Copy hadoop to the /usr/local/hadoop directory on slave78 and slave79. (If scp fails with "Permission denied", copy the files to a directory outside /usr first, then move them into /usr/local/hadoop on each node.)
scp -r /usr/local/hadoop hadoop@192.168.8.78:/usr/local/hadoop
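The same step for both slaves can be sketched with the permission workaround described above: copy into the hadoop user's home first, then move into /usr/local with sudo. Printed as a dry run; 192.168.8.79 for slave79 is an assumption (only .78 appears in the text). Remove the `echo` to execute.

```shell
# Dry-run sketch: distribute hadoop to both slaves via a user-writable path.
SLAVES="192.168.8.78 192.168.8.79"
for host in $SLAVES; do
  echo "scp -r /usr/local/hadoop hadoop@$host:/home/hadoop/hadoop"
  echo "ssh hadoop@$host 'sudo mv /home/hadoop/hadoop /usr/local/hadoop'"
done
```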
7、Configure the environment variables, start hadoop, and check that the installation succeeded
1) Configure the environment variables
# Edit /etc/profile (the Java environment variables were already added above; just append the following after them)
sudo gedit /etc/profile
#hadoop
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/bin
Then run
source /etc/profile
to make the changes take effect.
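The profile additions can be exercised directly in a shell, which is a harmless way to check them before editing /etc/profile (safe to run even before /usr/local/hadoop exists):

```shell
# Sketch of the profile additions from step 1); sourcing /etc/profile
# has the same effect.
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
echo "$HADOOP_HOME"   # prints /usr/local/hadoop
```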
2) Start hadoop: change into the hadoop installation directory, format the NameNode, then start the daemons:
bin/hdfs namenode -format
sbin/start-all.sh
3) After startup, run jps on the master and on each slave to check the processes.
If you see output like the following, the installation succeeded.
Master77:
hadoop@master77:/usr/local/hadoop$ jps
4706 Jps
3653 ResourceManager
4357 SecondaryNameNode
4121 NameNode
slave79:
hadoop@slave79:~$ jps
3643 Jps
3149 DataNode
3294 NodeManager
hadoop@slave79:~$
Addendum:
On starting the daemons separately:
start-all.sh is equivalent to running start-dfs.sh first and then start-yarn.sh.
If startup hits "ERROR: Can't get master address from ZooKeeper; znode data == null", starting the two separately resolved it.
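The separate-start sequence can be sketched as follows, printed as a dry run so it runs anywhere; the paths assume the /usr/local/hadoop install location used throughout this guide.

```shell
# Dry-run sketch of the separate-start workaround: HDFS first, then YARN,
# instead of start-all.sh.
HADOOP_HOME=/usr/local/hadoop
DFS_START="$HADOOP_HOME/sbin/start-dfs.sh"
YARN_START="$HADOOP_HOME/sbin/start-yarn.sh"
echo "$DFS_START"
echo "$YARN_START"
```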