Compile and Run HBase from Source, and Install a Hadoop Cluster

node1: namenode, datanode, jobtracker, tasktracker, zookeeper, hmaster, hregionserver

node2: datanode, tasktracker, hregionserver

 

Install Maven and edit /etc/profile:

export M2_HOME=/home/apache-maven-3.1.1

export PATH=$M2_HOME/bin:$PATH

 

source /etc/profile
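
A quick check that the Maven setup took effect (verification commands, not part of the original steps):

# Should print Apache Maven 3.1.1 and the JDK it found
mvn -version
# Should print /home/apache-maven-3.1.1
echo $M2_HOME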

 

Edit /etc/hosts

192.168.20.24 node1

192.168.20.98 node2
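
With the host entries in place, a simple connectivity check (hostnames as defined above) can be run from node1:

# Each name should resolve and answer
ping -c 1 node1
ping -c 1 node2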

 

Configure key-based login:

ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 0600 ~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys root@node2:/root/.ssh/
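
Before starting any daemons, it is worth confirming that passwordless login works (assuming root is used on both nodes, as above):

# Should print "node2" without asking for a password
ssh root@node2 hostname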

 

1.   Install Hadoop

Download Hadoop:

http://www.apache.org/dyn/closer.cgi/hadoop/common/

 

Edit conf/core-site.xml

<configuration>

 

<property>

    <name>fs.default.name</name>

    <value>hdfs://node1:9000</value>

</property>

<property>

    <name>dfs.permissions</name>

    <value>false</value>

</property>

 

</configuration>

 

Edit conf/hdfs-site.xml

<configuration>

 

<property>

        <name>dfs.data.dir</name>

        <value>/opt/hadoop/hadoop/dfs/name/data</value>

        <final>true</final>

</property>

<property>

        <name>dfs.name.dir</name>

        <value>/opt/hadoop/hadoop/dfs/name</value>

        <final>true</final>

</property>

<property>

        <name>dfs.replication</name>

        <value>2</value>

</property>

 

</configuration>

 

Edit conf/mapred-site.xml

<configuration>

 

<property>

        <name>mapred.job.tracker</name>

        <value>node1:9001</value>

</property>

 

</configuration>

 

Edit conf/slaves

node1

node2

 

Edit conf/masters

node1

 

Edit conf/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_21

 

Format the HDFS namenode (on node1):

bin/hadoop namenode -format
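
If the format succeeds, the name directory configured in hdfs-site.xml should now contain a current/ subdirectory (a quick sanity check):

# Expect current/ with fsimage, edits and VERSION inside
ls /opt/hadoop/hadoop/dfs/name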

 

Copy the Hadoop directory to node2:

scp -r hadoop-1.2.1/ root@node2:/opt/

 

 

Start Hadoop (on node1):

bin/start-all.sh
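
Once start-all.sh returns, the daemons can be verified with jps and an HDFS report (expected processes follow the role table at the top of this post):

# On node1: NameNode, DataNode, JobTracker, TaskTracker (and SecondaryNameNode)
jps
# On node2: DataNode and TaskTracker
ssh root@node2 jps
# Both datanodes should be listed as live nodes
bin/hadoop dfsadmin -report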

 

2. Compile and Install HBase

Download HBase

git clone https://github.com/apache/hbase.git

 

Edit conf/hbase-site.xml

<configuration>

 <property>

    <name>hbase.rootdir</name>

    <value>hdfs://node1:9000/hbase</value>

 </property>

 <property>

    <name>hbase.cluster.distributed</name>

    <value>true</value>

 </property>

 <property>

    <name>hbase.zookeeper.quorum</name>

    <value>node1</value>

 </property>

 <property>

   <name>hbase.assignment.maximum.attempts</name>

   <value>1</value>

 </property>

</configuration>

 

Edit conf/hbase-env.sh

export HBASE_MANAGES_ZK=true

export JAVA_HOME=/usr/java/jdk1.7.0_21

 

Edit conf/regionservers

node1

node2

 

Build HBase:

mvn clean package -DskipTests -Dhadoop.profile=1.1 -Dhadoop.version=1.2.1
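
After a successful build, the module target directories should contain the HBase jars. A quick sanity check (the exact module layout depends on the branch checked out):

# List jars produced by the build
find . -path "*/target/*.jar" | head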

 

Copy the built HBase directory to node2:

scp -r hbase-trunk/ root@node2:/opt/

 

Start HBase (on node1):

bin/start-hbase.sh
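
With HBase running, a short smoke test from the HBase shell confirms the master and regionservers are serving requests (the table name and column family below are arbitrary examples):

# HMaster and HRegionServer should now appear in jps alongside the Hadoop daemons
jps
# Exercise a table end to end from the shell
bin/hbase shell
create 't1', 'cf'
put 't1', 'row1', 'cf:a', 'value1'
scan 't1'
disable 't1'
drop 't1'
exit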

 
