1. Configuring the Hadoop cluster
Seven files need to be edited:
hadoop-2.7.1/etc/hadoop/hadoop-env.sh
hadoop-2.7.1/etc/hadoop/yarn-env.sh
hadoop-2.7.1/etc/hadoop/core-site.xml
hadoop-2.7.1/etc/hadoop/hdfs-site.xml
hadoop-2.7.1/etc/hadoop/mapred-site.xml
hadoop-2.7.1/etc/hadoop/yarn-site.xml
hadoop-2.7.1/etc/hadoop/slaves
(1) hadoop-env.sh and yarn-env.sh
In these two files, set JAVA_HOME to the directory where the JDK is actually installed on each machine.
Run:
$ gedit etc/hadoop/hadoop-env.sh (or: vi etc/hadoop/yarn-env.sh)
In each file, find the JAVA_HOME line and change it to your JDK directory (adjust the path to your own setup):
export JAVA_HOME=/home/hadoop/jdk_1.8.0_45
Also add this line to hadoop-env.sh:
export HADOOP_PREFIX=/home/hadoop/hadoop-2.7.1
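If you would rather not open an editor on every node, the same two edits can be scripted with sed. This is only a sketch: it runs against a scratch copy under /tmp so nothing real is touched; point CONF_DIR at your actual hadoop-2.7.1/etc/hadoop instead.

```shell
# Sketch: apply the JAVA_HOME / HADOOP_PREFIX edits with sed instead of gedit/vi.
# CONF_DIR is a scratch copy here; set it to hadoop-2.7.1/etc/hadoop for real use.
CONF_DIR=/tmp/hadoop-env-demo
mkdir -p "$CONF_DIR"
# Stand-in files with the stock JAVA_HOME lines (one active, one commented out):
printf 'export JAVA_HOME=${JAVA_HOME}\n' > "$CONF_DIR/hadoop-env.sh"
printf '# export JAVA_HOME=/home/y/libexec/jdk1.6.0/\n' > "$CONF_DIR/yarn-env.sh"

for f in hadoop-env.sh yarn-env.sh; do
  # Replace any existing JAVA_HOME line (commented or not) with the real JDK path.
  sed -i 's|^.*export JAVA_HOME=.*|export JAVA_HOME=/home/hadoop/jdk_1.8.0_45|' "$CONF_DIR/$f"
done
echo 'export HADOOP_PREFIX=/home/hadoop/hadoop-2.7.1' >> "$CONF_DIR/hadoop-env.sh"
grep 'JAVA_HOME\|HADOOP_PREFIX' "$CONF_DIR/hadoop-env.sh"
```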
(2) core-site.xml
Edit it along these lines:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
</configuration>
Note: if the /home/hadoop/tmp directory does not exist, create it first with mkdir.
For the full list of core-site.xml parameters, see
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/core-default.xml
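A malformed *-site.xml file only surfaces later as a cryptic error at format or start time, so it is worth checking well-formedness right after editing. A small sketch follows; it writes a scratch copy of the core-site.xml above and uses python3 purely as an XML parser. Point the script at your real etc/hadoop/core-site.xml instead.

```shell
# Sketch: check that a *-site.xml file parses and that a key reads back.
# Writes a scratch copy of the core-site.xml above; use your real file instead.
cat > /tmp/core-site-check.xml <<'XML'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/tmp</value>
</property>
</configuration>
XML
python3 - /tmp/core-site-check.xml <<'PY'
import sys, xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()            # raises if not well-formed
props = {p.findtext('name'): p.findtext('value') for p in root.iter('property')}
print(props['fs.defaultFS'])                      # prints hdfs://master:9000
PY
```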
(3) hdfs-site.xml
Edit it along these lines:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:50020</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/data/datanode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>slave1:9001</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
For the full list of hdfs-site.xml parameters, see
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
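The dfs.namenode.name.dir and dfs.datanode.data.dir directories are best created by hand before formatting, so the first format/start does not fail on a missing or unwritable path. A sketch; DATA_ROOT defaults to a scratch path here, so set it to /home/hadoop/data on the real nodes:

```shell
# Sketch: pre-create the NameNode and DataNode directories from hdfs-site.xml.
# DATA_ROOT is a scratch path here; use /home/hadoop/data on the real nodes.
DATA_ROOT="${HADOOP_DATA_ROOT:-/tmp/hadoop-data-demo}"
mkdir -p "$DATA_ROOT/namenode" "$DATA_ROOT/datanode"
ls "$DATA_ROOT"
```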
(4) mapred-site.xml
(In Hadoop 2.7.1 this file does not exist by default; create it by copying etc/hadoop/mapred-site.xml.template.)
Edit it along these lines:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
For the full list of mapred-site.xml parameters, see
http://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
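One point worth knowing about the two mapreduce.jobhistory.* addresses above: the JobHistory server is not launched by start-dfs.sh or start-yarn.sh, so master:19888 stays unreachable until you start it yourself. A sketch, using the script shipped in Hadoop 2.7.1's sbin/ directory, run from the hadoop-2.7.1 directory on master:

```shell
# Sketch: start the MapReduce JobHistory server on master (it is not started
# by start-dfs.sh / start-yarn.sh). Run from the hadoop-2.7.1 directory.
sbin/mr-jobhistory-daemon.sh start historyserver
jps | grep JobHistoryServer
```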
(5) yarn-site.xml
Edit it along these lines:
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8040</value>
</property>
</configuration>
For the full list of yarn-site.xml parameters, see
http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
(6) slaves
vim slaves
Change the contents to:
master
slave1
With this, both master and slave1 will run a DataNode process.
Finally, copy the hadoop-2.7.1 directory to every other machine, overwriting any existing copy.
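The copy step above can be done with scp. A sketch, assuming passwordless SSH between the nodes is already configured and using the host names from this guide's slaves file; add hosts to the list as needed:

```shell
# Sketch: push the configured hadoop-2.7.1 tree to each worker node.
# Assumes passwordless SSH; extend the host list to match your slaves file.
for host in slave1; do
  scp -r -q hadoop-2.7.1 "$host":~/
done
```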
2. Testing the Hadoop configuration
On the master host, run:
hdfs namenode -format
If the output contains "has been successfully formatted", the format succeeded; otherwise fix the configuration files according to the error messages.
If the command is not found, check /etc/profile, set the environment variables correctly, save, and then run: source /etc/profile
Run:
start-dfs.sh
Check the processes with jps:
jps
If NameNode and DataNode appear, HDFS started successfully.
On slave1, run jps as well; DataNode and SecondaryNameNode should appear.
Run:
start-yarn.sh
Check the processes again: ResourceManager and NodeManager should now appear on master.
On slave1, an additional NodeManager process indicates success.
To stop: stop-dfs.sh and stop-yarn.sh. When repeating these tests, always stop the services before starting them again, so the results are reliable.
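The jps checks above can be wrapped in a small helper. A sketch that simply greps jps output for the daemons a node is supposed to run (on master: NameNode, DataNode, ResourceManager, NodeManager; on slave1: DataNode, SecondaryNameNode, NodeManager):

```shell
# Sketch: report which of the expected daemons show up in `jps` output.
check_daemons() {   # usage: check_daemons "<jps output>" Daemon1 Daemon2 ...
  out="$1"; shift
  for d in "$@"; do
    case "$out" in
      *"$d"*) echo "$d up" ;;
      *)      echo "$d MISSING" ;;
    esac
  done
}
# On master, after start-dfs.sh and start-yarn.sh:
check_daemons "$(jps 2>/dev/null)" NameNode DataNode ResourceManager NodeManager
```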
Checking Hadoop through the web UIs
HDFS web UI: http://master:50070/
YARN web UI: no longer on port 50030 as in Hadoop 1.x, but on 8088: http://master:8088/
// If the pages are unreachable, replace the hostnames with IP addresses.
Checking Hadoop status
hdfs dfsadmin -report: view an HDFS status report
yarn node -list: view basic YARN node information
// Edit the configuration files carefully: even a small mistake can make the format step, or a later step, fail.
// If the wrong set of processes shows up after the start commands, delete everything under the data directory in the home directory on every machine, then run hdfs namenode -format on the master again.
3. Running an example on the Hadoop cluster
Run the bundled MapReduce grep example.
Run:
$ hadoop fs -mkdir /input
$ hadoop fs -put etc/hadoop/*.xml /input
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep /input /output 'dfs[a-z.]+'
$ hadoop fs -get /output /home/hadoop/output
View the result:
$ cat output/*
1 dfsadmin
The same result as in standalone mode: 'dfsadmin' appears exactly once.
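One thing to watch when re-running the example: MapReduce refuses to start a job whose output directory already exists, so clear it first. A sketch, using the paths from the commands above:

```shell
# Sketch: clear the previous run's output before re-submitting the job,
# otherwise it fails with "Output directory ... already exists".
hadoop fs -rm -r -f /output
rm -rf /home/hadoop/output   # the local copy fetched with -get, if any
```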
// hadoop fs commands sometimes refuse write or delete operations because HDFS is in safe mode. You can force it to leave safe mode, but that is not recommended; safe mode here usually means the daemons on the hosts are out of sync, so stop all services and start them again, or simply reboot the machines.
// For the full set of hadoop fs shell commands, see: http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-common/FileSystemShell.html