Environment:
Set up a Hadoop-2.5.1 distributed cluster on two servers running CentOS 6.4 (32-bit). Only two machines, since this is mainly for experimentation.
1. Modify the hostname and the /etc/hosts file
1) Modify the hostname (optional)
- vi /etc/sysconfig/network
- HOSTNAME=XXX
The change takes effect after a reboot.
2) /etc/hosts maps IP addresses to hostnames so that each machine knows which IP corresponds to which hostname. The format is:
- #IPAddress HostName
- 192.168.1.67 MasterServer
- 192.168.1.241 SlaveServer
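To verify the mapping, each machine should be able to reach the other by hostname; a quick check (not part of the original steps; use the hostnames from your own /etc/hosts):
- ping -c 1 SlaveServer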
2. Configure passwordless SSH login
1) Generate a key pair:
- ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
2) Append id_dsa.pub (the public key) to the authorized keys:
- cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3) Copy the authorization file to the other nodes:
- scp ~/.ssh/authorized_keys hadooper@192.168.1.241:~/.ssh/
4) Test:
- ssh SlaveServer
The first connection asks you to confirm the host key; type yes.
In my case SSH still prompted for a password; the cause was wrong permissions on .ssh and authorized_keys (a typical fix is shown below). Details: http://blog.csdn.net/hwwn2009/article/details/39852457
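A minimal fix, assuming everything runs as the hadooper user: sshd ignores key files that are group- or world-accessible, so tighten the permissions on every node:
- chmod 700 ~/.ssh
- chmod 600 ~/.ssh/authorized_keys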
3. Install the JDK on every node
1) The version used here is jdk-6u27-linux-i586.bin. Download: http://pan.baidu.com/s/1mgICcFA
2) Upload it to the hadooper user's home directory, add execute permission, and run it:
- chmod 777 jdk-6u27-linux-i586.bin
- ./jdk-6u27-linux-i586.bin
3) Move the extracted jdk1.6.0_27 directory under /usr/lib/jvm/jdk1.6/ and append the following to /etc/profile:
- #JAVA_HOME
- export JAVA_HOME=/usr/lib/jvm/jdk1.6/jdk1.6.0_27
- export PATH=$JAVA_HOME/bin:$PATH
4) Run java -version to check the JDK version and verify the installation (source /etc/profile first).
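If the installation succeeded, the output should begin with something like this (the exact build string may differ):
- java version "1.6.0_27"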
4. Install Hadoop
Hadoop must be installed on every node. Upload hadoop-2.5.1.tar.gz to the hadooper user's home directory.
1) Unpack:
- tar -zvxf hadoop-2.5.1.tar.gz
2) Append the following to /etc/profile, then reload it:
- export HADOOP_HOME=/home/hadooper/hadoop/hadoop-2.5.1
- export HADOOP_COMMON_HOME=$HADOOP_HOME
- export HADOOP_HDFS_HOME=$HADOOP_HOME
- export HADOOP_MAPRED_HOME=$HADOOP_HOME
- export HADOOP_YARN_HOME=$HADOOP_HOME
- export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
- export CLASSPATH=.:$JAVA_HOME/lib:$HADOOP_HOME/lib:$CLASSPATH
- export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
- source /etc/profile
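A quick sanity check that the variables took effect (hadoop version is a standard subcommand):
- hadoop version
The first line of output should read Hadoop 2.5.1.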
3) Modify the configuration files under $HADOOP_CONF_DIR ($HADOOP_HOME/etc/hadoop):
(1) core-site.xml
- <property>
- <name>fs.defaultFS</name>
- <value>hdfs://MasterServer:9000</value>
- </property>
(2) hdfs-site.xml
- <property>
- <name>dfs.replication</name> <!-- must not exceed the number of DataNodes -->
- <value>1</value>
- </property>
- <property>
- <name>dfs.namenode.name.dir</name> <!-- keep the distributed filesystem under the local directory /home/hadooper/hadoop/dfs -->
- <value>/home/hadooper/hadoop/dfs/name</value>
- <description> </description>
- </property>
- <property>
- <name>dfs.datanode.data.dir</name>
- <value>/home/hadooper/hadoop/dfs/data</value>
- <description> </description>
- </property>
- <property>
- <name>dfs.webhdfs.enabled</name>
- <value>true</value>
- </property>
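The two local directories above should exist and be owned by hadooper before the NameNode is formatted; a minimal sketch (not in the original steps; paths as configured above):
- mkdir -p /home/hadooper/hadoop/dfs/name /home/hadooper/hadoop/dfs/data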
(3) mapred-site.xml
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
- <property>
- <name>mapreduce.jobhistory.address</name>
- <value>MasterServer:10020</value>
- </property>
- <property>
- <name>mapreduce.jobhistory.webapp.address</name>
- <value>MasterServer:19888</value>
- </property>
jobhistory is a history server bundled with Hadoop that records completed MapReduce jobs. It is not started by default; start it with:
- sbin/mr-jobhistory-daemon.sh start historyserver
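Afterwards, a quick check that it is running (filters the jps listing, as shown again in section 5):
- jps | grep JobHistoryServer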
(4) yarn-site.xml
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
- <property>
- <name>yarn.resourcemanager.address</name>
- <value>MasterServer:8032</value>
- </property>
- <property>
- <name>yarn.resourcemanager.scheduler.address</name>
- <value>MasterServer:8030</value>
- </property>
- <property>
- <name>yarn.resourcemanager.resource-tracker.address</name>
- <value>MasterServer:8031</value>
- </property>
- <property>
- <name>yarn.resourcemanager.admin.address</name>
- <value>MasterServer:8033</value>
- </property>
- <property>
- <name>yarn.resourcemanager.webapp.address</name>
- <value>MasterServer:8088</value>
- </property>
(5) slaves (list the hostname of each slave node, one per line):
- SlaveServer
(6) hadoop-env.sh and yarn-env.sh: set JAVA_HOME explicitly
- export JAVA_HOME=/usr/lib/jvm/jdk1.6/jdk1.6.0_27
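Every node needs the same installation and configuration; one way to keep them in sync is to copy the whole directory to the slave (a sketch, assuming identical paths on both machines):
- scp -r /home/hadooper/hadoop/hadoop-2.5.1 hadooper@192.168.1.241:/home/hadooper/hadoop/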
5. Run Hadoop
1) Format the NameNode (on MasterServer, first run only):
- hdfs namenode -format
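On success, the log output should contain a line roughly like the following (path as set in dfs.namenode.name.dir; exact wording may vary by version):
- common.Storage: Storage directory /home/hadooper/hadoop/dfs/name has been successfully formatted.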
2) Start HDFS and YARN:
- start-dfs.sh
- start-yarn.sh
Alternatively, start-all.sh and stop-all.sh start and stop everything at once, although they are marked deprecated in Hadoop 2.x:
- start-all.sh
- stop-all.sh
3) Check the running daemons with jps. On MasterServer:
- 7692 ResourceManager
- 8428 JobHistoryServer
- 7348 NameNode
- 14874 Jps
- 7539 SecondaryNameNode
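On SlaveServer, jps should show the worker daemons instead (PIDs will differ):
- DataNode
- NodeManager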
4) Web interfaces:
(1) HDFS NameNode UI: http://192.168.1.67:50070
(2) YARN ResourceManager UI: http://192.168.1.67:8088/
(3) JobHistory UI: http://192.168.1.67:19888
6. Run the bundled wordcount example
1) Create the input file:
- echo "My first hadoop example.Hello Hadoop in input." > input
2) Create a home directory on HDFS and upload the input file:
- hadoop fs -mkdir /user/hadooper
- hadoop fs -put input /user/hadooper
3) Run the wordcount job:
- hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount /user/hadooper/input /user/hadooper/output
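When the job finishes, the output directory should contain a _SUCCESS marker plus the reducer output file (these names are the Hadoop defaults); a quick check:
- hadoop fs -ls /user/hadooper/output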
4) View the result:
- hadoop fs -cat /user/hadooper/output/part-r-00000
- Hadoop 1
- My 1
- example.Hello 1
- first 1
- hadoop 1
- in 1
- input. 1