Setting Up a Hadoop Cluster
Install SSH and Set Up Passwordless Login
- Check whether SSH is installed: rpm -qa | grep ssh
- If not, install the server: yum install openssh-server
- The ssh command itself needs the client package: yum -y install openssh-clients
- Start the SSH service: service sshd restart
- Enable it at boot: chkconfig sshd on
- Generate a key pair: ssh-keygen
- Passwordless login, step 1: copy the public key to the shared folder (run on every node; shown here for master):
cat ~/.ssh/id_rsa.pub > /mnt/hgfs/share/master.pub
- Passwordless login, step 2: add the trust relationship by concatenating all three public keys (run on every node):
cat /mnt/hgfs/share/master.pub /mnt/hgfs/share/slave1.pub /mnt/hgfs/share/slave2.pub > ~/.ssh/authorized_keys
At this point, ssh root@192.168.208.100 or ssh 192.168.208.100 works without a password.
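If SSH still prompts for a password afterwards, the usual culprit is permissions: sshd rejects key files that are group- or world-accessible. The standard fix, run on every node:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys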
To log in by hostname instead of IP:
(1). Change this host's own name: vi /etc/sysconfig/network and edit the HOSTNAME line
(2). Add the hosts to be resolved: vi /etc/hosts, e.g. 192.168.208.100 master
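For this cluster, /etc/hosts on every node ends up with one line per machine. Only the master's address appears in these notes; the two slave addresses below are assumed placeholders, so substitute your own:
192.168.208.100 master
192.168.208.101 slave1
192.168.208.102 slave2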
Install the JDK
Copy jdk-7u79-linux-x64.tar.gz into the host machine's shared directory D:\VMStation\share
On the server:
Extract it: tar -zxvf /mnt/hgfs/share/jdk-7u79-linux-x64.tar.gz -C /usr/
Add the environment variables: vim /etc/profile
Append at the end:
# set java environment
export JAVA_HOME=/usr/jdk1.7.0_79
export JRE_HOME=/usr/jdk1.7.0_79/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Apply the changes: source /etc/profile
Verify: java -version
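If the variables are set correctly, the first line of the output should read:
java version "1.7.0_79"
followed by the Java Runtime Environment and HotSpot VM build details.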
Install Hadoop (install on the master first, then copy to the slaves)
Download the hadoop-2.6.2.tar.gz package from the Hadoop website and place it in the local shared path
Extract it: tar -zxvf /mnt/hgfs/share/hadoop-2.6.2.tar.gz -C /usr/
Add the environment variables: vim /etc/profile
Append at the end:
# set hadoop environment
export HADOOP_HOME=/usr/hadoop-2.6.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the changes: source /etc/profile
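As a quick sanity check that the new PATH entries took effect:
hadoop version
which should report Hadoop 2.6.2.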
Configure Hadoop (the configuration files live under the etc/hadoop subdirectory: cd /usr/hadoop-2.6.2/etc/hadoop)
1. vi core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop-2.6.2/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>
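Hadoop normally creates hadoop.tmp.dir when the NameNode is formatted, but creating it up front on every node avoids ownership and permission surprises:
mkdir -p /usr/hadoop-2.6.2/tmp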
2. vi hadoop-env.sh and add at the top: export JAVA_HOME=/usr/jdk1.7.0_79
3. vi yarn-env.sh and add at the top: export JAVA_HOME=/usr/jdk1.7.0_79
4. vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/hadoop-2.6.2/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/hadoop-2.6.2/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>hadoop-cluster1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
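Similarly, the name and data directories referenced above can be created ahead of time (the name directory is used by the NameNode on master, the data directory by the DataNodes on the slaves; creating both on every node is harmless):
mkdir -p /usr/hadoop-2.6.2/dfs/name /usr/hadoop-2.6.2/dfs/data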
5. vi mapred-site.xml (this file does not ship by default; create it first with cp mapred-site.xml.template mapred-site.xml)
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
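One caveat about the jobhistory addresses above: start-all.sh does not launch the JobHistory Server, so ports 10020 and 19888 only respond after it is started manually on master:
mr-jobhistory-daemon.sh start historyserver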
6. vi yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
7. vi slaves (this file lists the worker nodes, one hostname per line)
slave1
slave2
8. Copy it to the slaves: scp -r /usr/hadoop-2.6.2/ root@slave1:/usr/ (repeat for slave2)
9. Configure the Hadoop environment variables on each slave
Add the environment variables: vim /etc/profile
Append at the end:
# set hadoop environment
export HADOOP_HOME=/usr/hadoop-2.6.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the changes: source /etc/profile
10. Disable the firewall: chkconfig iptables off (this only takes effect at the next boot; also run service iptables stop to stop it immediately)
Start Hadoop
Format the NameNode (on master only): hadoop namenode -format (in 2.x the preferred form is hdfs namenode -format)
Start Hadoop: start-all.sh (equivalent to running start-dfs.sh and then start-yarn.sh)
Check the processes on each machine: jps
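With this configuration, jps should show roughly the following sets of daemons (process IDs will differ):
On master: NameNode, SecondaryNameNode, ResourceManager, Jps
On slave1 and slave2: DataNode, NodeManager, Jps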
Check Hadoop Status
NameNode status: http://<master's IP>:50070
ResourceManager status: http://<master's IP>:8088
If these pages cannot be reached from the host machine via the VM's IP, try restarting the VM.
To use the hostname master instead of the IP, add the corresponding mapping to C:\Windows\System32\drivers\etc\hosts on the host machine.
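For example, matching the master address used earlier:
192.168.208.100 master
After that, http://master:50070 and http://master:8088 also work from the Windows host.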