Hadoop Cluster Setup (Part 4): Installing Hadoop 2.6.0

Software environment
OS: CentOS 6.4 64-bit (Basic Server + desktop environment)
VM: VMware Workstation 12.0
JDK 1.7
1 Installing Hadoop 2.6.0
1.1 Download
wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
1.2 Extract and install
mkdir -p ~/local/opt
tar -zxf hadoop-2.6.0.tar.gz -C ~/local/opt
1.3 Configure environment variables
vim ~/.bashrc

Append the following to the end of the file:

export HADOOP_PREFIX=$HOME/local/opt/hadoop-2.6.0
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
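
Reload the shell configuration and check that the hadoop command is on the PATH (a quick sanity check, assuming the paths shown above):

source ~/.bashrc
hadoop version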
2 Configuration files (6 in total)
2.1 hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.101.x86_64
2.2 core-site.xml
<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/local/var/hadoop/tmp/hadoop-${user.name}</value>
</property>
</configuration>
2.3 hdfs-site.xml
<configuration>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/local/var/hadoop/hdfs/datanode</value> 
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/local/var/hadoop/hdfs/namenode</value> 
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///home/hadoop/local/var/hadoop/hdfs/namesecondary</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
</configuration>
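
The data, name, and checkpoint directories above do not exist yet. Creating them up front (together with the tmp directory from core-site.xml) avoids permission surprises; a convenience sketch, assuming the hadoop user owns ~/local:

mkdir -p ~/local/var/hadoop/tmp
mkdir -p ~/local/var/hadoop/hdfs/namenode
mkdir -p ~/local/var/hadoop/hdfs/datanode
mkdir -p ~/local/var/hadoop/hdfs/namesecondary
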
2.4 yarn-site.xml
<configuration>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
</configuration>
2.5 mapred-site.xml
<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
</property>
</configuration>
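
Note that the Hadoop 2.6.0 distribution ships only mapred-site.xml.template; if mapred-site.xml is not present yet, copy the template before editing it:

cd $HADOOP_CONF_DIR
cp mapred-site.xml.template mapred-site.xml
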
2.6 slaves
master
slave1
slave2
3 Replacing the native library with 64-bit builds

The native libraries bundled with the Apache binary release frequently fail to load on 64-bit CentOS (the familiar "Unable to load native-hadoop library" warning), so we replace them with a 64-bit build.

3.1 Download the 64-bit native library
wget http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.6.0.tar
3.2 Extract the native library and overwrite the files in hadoop-2.6.0/lib/native
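
For example (a sketch, assuming the .so files sit at the top level of the sequenceiq tarball):

tar -xf hadoop-native-64-2.6.0.tar -C ~/local/opt/hadoop-2.6.0/lib/native/
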
4 Starting Hadoop
4.1 Format HDFS (run once, on master only)
hdfs namenode -format
4.2 Start the Hadoop cluster

Copy the local directory to slave1 and slave2 and configure .bashrc on each node as in section 1.3, then start the daemons from master.
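
For example, assuming passwordless SSH between the nodes (set up in an earlier part of this series) and the same hadoop user and home-directory layout on every node:

scp -r ~/local slave1:~/
scp -r ~/local slave2:~/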

start-dfs.sh
start-yarn.sh
4.3 Verify
jps

If everything went well, you should see output like the following (the process IDs will differ):
master:
xxxx NameNode
xxxx DataNode
xxxx SecondaryNameNode
xxxx ResourceManager
xxxx NodeManager
xxxx Jps
slave1:
xxxx DataNode
xxxx NodeManager
xxxx Jps
slave2:
xxxx DataNode
xxxx NodeManager
xxxx Jps
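
You can also confirm the cluster through the web UIs: the NameNode UI at http://master:50070 and the ResourceManager UI at http://master:8088 (the default ports in Hadoop 2.6).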
