- Preparation
-
(1) Software
1. hadoop-2.7.3.tar.gz
2. jdk-8u91-linux-x64.rpm
(2) Environment
1. CentOS 7 virtual machines with working networking; see the earlier post on configuring CentOS 7 networking in VirtualBox for details.
Suggested minimums: 2 GB RAM and 20 GB disk for the master node, 1 GB RAM and 10 GB disk for each worker; the real requirement depends mainly on which components the cluster will run.
- Installation steps
-
Passwordless SSH setup
Generate a key pair on every machine:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
(Newer OpenSSH releases disable DSA keys by default; ssh-keygen -t rsa is the safer choice there.)
Then append the machine's own public key to its authorized list:
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
On the master, copy the master's public key to each worker (shown for slave2; repeat for every worker):
scp ~/.ssh/id_dsa.pub root@slave2:~/
On each worker, append the copied key to its authorized list:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
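The copy-and-append steps above can be collapsed into one loop run from the master. A minimal sketch, assuming the worker hostnames (slave1, slave2) and root login used throughout this post:

```shell
#!/bin/sh
# Push the master's public key to each worker and append it to the
# worker's authorized_keys. Hostnames and the root login are
# assumptions carried over from the rest of this post.
distribute_key() {
  for host in "$@"; do
    scp ~/.ssh/id_dsa.pub "root@${host}:~/"
    ssh "root@${host}" 'cat ~/id_dsa.pub >> ~/.ssh/authorized_keys'
  done
}
# On a real cluster: distribute_key slave1 slave2
```

Note that each scp/ssh here still prompts for a password; passwordless login only works after the key has been appended on the worker.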
-
Install the JDK
rpm -ivh jdk-8u91-linux-x64.rpm
After installation, JAVA_HOME=/usr/java/default
-
Install Hadoop
Unpack: tar zxvf hadoop-2.7.3.tar.gz
Rename: mv hadoop-2.7.3 hadoop
Move it to the location the environment variables below assume: mv hadoop /usr/local/
-
Configure the Hadoop cluster
Create the data directories (use mkdir -p so missing parents such as dfs/ are created):
Hadoop data root: mkdir -p /home/hadoopdir
Temporary files: mkdir -p /home/hadoopdir/tmp
NameNode metadata (dfs.namenode.name.dir): mkdir -p /home/hadoopdir/dfs/name
DataNode blocks (dfs.datanode.data.dir in hdfs-site.xml): mkdir -p /home/hadoopdir/dfs/data
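The directory creation above can be wrapped in a small helper; -p creates the whole tree in one pass. The base-directory parameter is for illustration, the post itself uses /home/hadoopdir:

```shell
#!/bin/sh
# Create the Hadoop data layout (tmp/, dfs/name, dfs/data) under a
# given base directory. -p creates missing parent directories.
make_hadoop_dirs() {
  base=$1
  mkdir -p "$base/tmp" "$base/dfs/name" "$base/dfs/data"
}
# As in the post: make_hadoop_dirs /home/hadoopdir
```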
Configure environment variables
vim /etc/profile
Append at the end of profile:
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=${HADOOP_INSTALL}/bin:${HADOOP_INSTALL}/sbin:${PATH}
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
Save and exit, then run: source /etc/profile
All of the following steps are performed inside: cd /usr/local/hadoop/etc/hadoop
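A quick sanity check after source /etc/profile: confirm the Hadoop bin directory actually landed on PATH (a missing colon between PATH entries silently drops directories). This helper is illustrative, not part of the original post:

```shell
#!/bin/sh
# Print "yes" if the given PATH string contains the Hadoop bin dir.
path_has_hadoop() {
  case ":$1:" in
    *:/usr/local/hadoop/bin:*) echo yes ;;
    *) echo no ;;
  esac
}
# After sourcing /etc/profile: path_has_hadoop "$PATH"
```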
Edit core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoopdir/tmp/</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>
Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoopdir/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoopdir/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Edit mapred-site.xml
mapred-site.xml does not exist by default; copy the template first:
cp mapred-site.xml.template mapred-site.xml
Then edit it: vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
Edit yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
Edit the slaves file (one worker hostname per line):
slave1
slave2
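Everything so far was configured on the master only; each worker needs the same Hadoop tree and profile before the cluster can start. A sketch of that sync step, with hostnames, root login, and /usr/local/hadoop carried over as assumptions from the rest of this post:

```shell
#!/bin/sh
# Copy the configured Hadoop tree and environment profile to each
# worker. Hostnames, the root login, and the /usr/local/hadoop path
# are assumptions taken from the rest of this post.
sync_to_workers() {
  for host in "$@"; do
    scp -r /usr/local/hadoop "root@${host}:/usr/local/"
    scp /etc/profile "root@${host}:/etc/profile"
  done
}
# On a real cluster: sync_to_workers slave1 slave2
```

Remember to run source /etc/profile on each worker afterwards.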
Format HDFS (once, on the master): hdfs namenode -format
(The older hadoop namenode -format still works but prints a deprecation warning on 2.x.)
Copy the dfs contents to each worker:
scp -r /home/hadoopdir/dfs/* slave1:/home/hadoopdir/dfs
scp -r /home/hadoopdir/dfs/* slave2:/home/hadoopdir/dfs
Start the cluster:
cd /usr/local/hadoop && ./sbin/start-all.sh
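After start-all.sh, jps on the master should list NameNode, SecondaryNameNode, and ResourceManager, and on each worker DataNode and NodeManager. A small checker over jps output (the daemon names are standard for this layout; the helper itself is illustrative):

```shell
#!/bin/sh
# Succeed only if a jps listing contains every expected daemon name.
daemons_ok() {
  listing=$1; shift
  for d in "$@"; do
    echo "$listing" | grep -q "$d" || return 1
  done
}
# On the master: daemons_ok "$(jps)" NameNode SecondaryNameNode ResourceManager
# On a worker:   daemons_ok "$(jps)" DataNode NodeManager
```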