Installing a Hadoop 2.6.1 Cluster

1. Download the 64-bit Hadoop build

wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.6.1/hadoop-2.6.1.tar.gz
tar zxvf hadoop-2.6.1.tar.gz
file hadoop-2.6.1/lib/native/libhadoop.so.1.0.0
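If the bundled native library is 64-bit, file should report it as an ELF 64-bit shared object (x86-64); if it reports 32-bit, the native directory has to be replaced with a locally compiled one, as done in step 6 below.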


2. Install the JDK
http://blog.csdn.net/u013619834/article/details/38894649
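The link above covers the details; in short, on CentOS it comes down to installing the JDK and exporting JAVA_HOME. A minimal sketch (the rpm filename and install path are assumptions for JDK 7u79; match them to your actual JDK):

rpm -ivh jdk-7u79-linux-x64.rpm
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
EOF
source /etc/profile
java -version   # verify the install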


3. Set the hostname and the hosts file
vim /etc/hosts
192.168.20.221  master1
192.168.20.223  slave1
192.168.20.224  slave2
192.168.20.225  slave3
vim /etc/sysconfig/network
Then set the hostname on each node accordingly.
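On CentOS 6, for example, it is the HOSTNAME line that changes, with one value per node:

# /etc/sysconfig/network on master1; use slave1, slave2, slave3 on the others
NETWORKING=yes
HOSTNAME=master1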


4. Create the hadoop user
useradd hadoop
echo "hadooppwd" | passwd --stdin hadoop

5. Set up passwordless SSH login from master1
su - hadoop
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave3
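A quick check such as the following should print all four hostnames without a password prompt:

for h in master1 slave1 slave2 slave3; do ssh hadoop@$h hostname; done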


6. Edit the configuration files

First put the tree in place, then replace the bundled native directory with the 64-bit one compiled earlier (lib/native here refers to your compiled copy):

tar zxvf hadoop-2.6.1.tar.gz
mv hadoop-2.6.1 /home/hadoop
chown -R hadoop.hadoop /home/hadoop/hadoop-2.6.1
rm -rf /home/hadoop/hadoop-2.6.1/lib/native
cp -r lib/native /home/hadoop/hadoop-2.6.1/lib

cd /home/hadoop/hadoop-2.6.1/etc/hadoop


vim slaves
Add:

slave1
slave2
slave3


vim core-site.xml
Add:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop</value>
    </property>
</configuration>


vim hdfs-site.xml
Add:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
Add:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>


vim yarn-site.xml
Add:
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>



7. Copy the Hadoop directory to all the other nodes
scp -r hadoop-2.6.1 hadoop@slave1:~
scp -r hadoop-2.6.1 hadoop@slave2:~
scp -r hadoop-2.6.1 hadoop@slave3:~


8. Switch back to the root user and create the data directory
mkdir -p /data/hadoop
chown -R hadoop.hadoop /data/hadoop
chown -R hadoop.hadoop /home/hadoop/hadoop-2.6.1
su - hadoop

9. Common commands
Format the NameNode:
/home/hadoop/hadoop-2.6.1/bin/hdfs namenode -format

Start Hadoop:
/home/hadoop/hadoop-2.6.1/sbin/start-all.sh

Stop Hadoop:
/home/hadoop/hadoop-2.6.1/sbin/stop-all.sh
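
start-all.sh and stop-all.sh are deprecated in Hadoop 2.x; start-dfs.sh plus start-yarn.sh (and their stop counterparts) do the same job. Either way, jps on each node is a quick sanity check:

jps                     # on master1: NameNode, SecondaryNameNode, ResourceManager
ssh hadoop@slave1 jps   # on each slave: DataNode, NodeManager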


Installing a Hadoop 2.6.0 HA Cluster (NameNode HA with ZooKeeper)

1. Install the JDK

http://blog.csdn.net/u013619834/article/details/38894649

2. Install the ZooKeeper cluster (here using 192.168.1.121, 192.168.1.122, 192.168.1.123)
http://blog.csdn.net/u013619834/article/details/41316957
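Before continuing, it is worth confirming the quorum is healthy; run this on each ZooKeeper node (assuming zkServer.sh is on the PATH) and expect one leader and two followers:

zkServer.sh status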


3. Set the hostname and the hosts file

vim /etc/hosts
192.168.1.111 HMaster1
192.168.1.112 HMaster2
192.168.1.121 HSlave1
192.168.1.122 HSlave2
192.168.1.123 HSlave3

vim /etc/sysconfig/network

Then set the hostname on each node accordingly.



4. Create the hadoop user
useradd hadoop
echo "hadooppwd" | passwd --stdin hadoop

5. Set up passwordless SSH login on HMaster1 and HMaster2
su - hadoop
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.1.111
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.1.112
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.1.121
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.1.122
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.1.123
scp ~/.ssh/id_rsa hadoop@192.168.1.112:~/.ssh/id_rsa
Copying the private key means HMaster2 shares the same key pair (the sshfence setting in hdfs-site.xml points at this key). From both HMaster1 and HMaster2, SSH into each of the other servers to confirm passwordless login works.


6. Download Hadoop. (Note: this step is still problematic; in practice the source still has to be downloaded and compiled.)

Download Hadoop from http://hadoop.apache.org/releases.html; starting with 2.5, officially compiled 64-bit builds are available for download.
su - hadoop
cd ~
wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
tar zxvf hadoop-2.6.0.tar.gz
Check that it is 64-bit:
file hadoop-2.6.0/lib/native/libhadoop.so.1.0.0

7. Edit the configuration files
cd hadoop-2.6.0/etc/hadoop
vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <!-- Logical name of the HDFS filesystem; must match dfs.nameservices in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hcluster</value>
    </property>

    <!-- Base directory where Hadoop stores data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop</value>
    </property>

    <!-- ZooKeeper quorum used for HA -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.1.121:2181,192.168.1.122:2181,192.168.1.123:2181</value>
    </property>
</configuration>


vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <!-- Must use the same name as fs.defaultFS in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>hcluster</value>
    </property>

    <!-- NameNode IDs within the nameservice (hcluster); at most two -->
    <property>
        <name>dfs.ha.namenodes.hcluster</name>
        <value>HMaster1,HMaster2</value>
    </property>

    <!-- Where HDFS block data is stored; multiple comma-separated paths on different disks can be used to get past a single-disk bottleneck. dfs.datanode.data.dir is the current name for the deprecated dfs.data.dir -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/hdfs/data</value>
    </property>

    <!-- Number of block replicas, set according to the number of DataNodes; the default is 3 -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <!-- RPC address each NameNode listens on -->
    <property>
        <name>dfs.namenode.rpc-address.hcluster.HMaster1</name>
        <value>HMaster1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hcluster.HMaster2</name>
        <value>HMaster2:9000</value>
    </property>

    <!-- HTTP address of each NameNode -->
    <property>
        <name>dfs.namenode.http-address.hcluster.HMaster1</name>
        <value>HMaster1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hcluster.HMaster2</name>
        <value>HMaster2:50070</value>
    </property>

    <!-- Where the NameNode stores metadata and edit logs -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/hdfs/name</value>
    </property>

    <!-- URI of the JournalNode group the NameNodes read from and write to -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://HSlave1:8485;HSlave2:8485;HSlave3:8485/hcluster</value>
    </property>

    <!-- Local directory where each JournalNode stores its state -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/dfs/journal</value>
    </property>

    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- Implementation used for client failover between NameNodes -->
    <property>
        <name>dfs.client.failover.proxy.provider.hcluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- Fencing method; ensures only one NameNode is active at any time -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <!-- sshfence requires passwordless SSH; path to the private key -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
</configuration>
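
Once the cluster has been initialized (see step 10), the active/standby state of each NameNode can be checked with haadmin, run from the hadoop-2.6.0 directory and using the IDs from dfs.ha.namenodes.hcluster:

bin/hdfs haadmin -getServiceState HMaster1
bin/hdfs haadmin -getServiceState HMaster2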



vim yarn-site.xml
<?xml version="1.0"?>

<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- Cluster ID for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
    </property>

    <!-- Logical IDs of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <!-- RM host 1 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>HMaster1</value>
    </property>
    <!-- RM host 2 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>HMaster2</value>
    </property>

    <!-- Automatic failover recovery for the RM -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.recover.enabled</name>
        <value>true</value>
    </property>

    <!-- Enable recovery of RM state after failure -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <!-- Storage class for RM state -->
    <property>
       <name>yarn.resourcemanager.store.class</name>
       <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

    <!-- ZooKeeper quorum used to store RM state -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>192.168.1.121:2181,192.168.1.122:2181,192.168.1.123:2181</value>
    </property>

    <!-- Scheduler address of each RM -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>HMaster1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>HMaster2:8030</value>
    </property>

    <!-- Address NodeManagers use to exchange information with the RM -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>HMaster1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>HMaster2:8031</value>
    </property>

    <!-- Address clients use to submit applications to the RM -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>HMaster1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>HMaster2:8032</value>
    </property>
   
    <!-- Address administrators use to send admin commands to the RM -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>HMaster1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>HMaster2:8033</value>
    </property>

    <!-- RM web UI address, for viewing cluster information -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>HMaster1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>HMaster2:8088</value>
    </property>
</configuration> 
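
Similarly, once YARN is up, the state of each ResourceManager can be checked with rmadmin, using the IDs from yarn.resourcemanager.ha.rm-ids:

bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2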


vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- MapReduce JobHistory Server address; default port 10020 -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>0.0.0.0:10020</value>
    </property>
    <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
    </property>
</configuration>
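
The JobHistory server configured above is not started by the regular start scripts; it is launched separately with the bundled daemon script:

sbin/mr-jobhistory-daemon.sh start historyserver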


vim hadoop-env.sh

Change:

export JAVA_HOME=/usr/java/jdk1.7.0_79


vim slaves
HSlave1
HSlave2
HSlave3


8. Switch back to the root user and create the data directory
mkdir -p /data/hadoop
chown -R hadoop.hadoop /data/hadoop

9. Copy the Hadoop directory to all the other nodes
scp -r hadoop-2.6.0 hadoop@192.168.1.112:~
scp -r hadoop-2.6.0 hadoop@192.168.1.121:~
scp -r hadoop-2.6.0 hadoop@192.168.1.122:~
scp -r hadoop-2.6.0 hadoop@192.168.1.123:~


10. Common commands
Format the NameNode on HMaster1:
cd /home/hadoop/hadoop-2.6.0/bin
./hdfs namenode -format
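
For an HA cluster, formatting alone is not enough. A typical initialization sequence is sketched below (run as the hadoop user from /home/hadoop/hadoop-2.6.0; verify each step against your environment):

# on HSlave1, HSlave2, HSlave3: start the JournalNodes before formatting
sbin/hadoop-daemon.sh start journalnode

# on HMaster1: format and start the first NameNode
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

# on HMaster2: pull the formatted metadata over as the standby
bin/hdfs namenode -bootstrapStandby

# on HMaster1: create the failover znode in ZooKeeper, then start everything
bin/hdfs zkfc -formatZK
sbin/start-dfs.sh
sbin/start-yarn.sh

# on HMaster2: the second ResourceManager must be started by hand
sbin/yarn-daemon.sh start resourcemanager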