Hadoop CDH3U5: Complete Installation from Tarballs


Last time I installed from online packages, which depends on external network access and takes quite a while, so it is not well suited to repeated deployments. This time I redeployed using tarballs instead. Previously everything was done as root; this time, as required, a non-root user was used, which ran into some permission-control problems, but in the end they were all solved.


General Notes

 

Deployment plan:

IP          Hostname       Components installed
10.0.0.123  hadoop-master  - namenode, jobtracker, datanode, tasktracker
                           - hbase-master, hbase-thrift
                           - secondarynamenode
                           - zookeeper-server
10.0.0.125  hadoop-slave   - datanode, tasktracker
                           - hbase-regionserver
                           - zookeeper-server


Download

Download the needed components from https://ccp.cloudera.com/display/SUPPORT/CDH3+Downloadable+Tarballs:

hadoop, hbase, hive, zookeeper

 

http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u5.tar.gz

http://archive.cloudera.com/cdh/3/zookeeper-3.3.5-cdh3u5.tar.gz

http://archive.cloudera.com/cdh/3/hive-0.7.1-cdh3u5.tar.gz

http://archive.cloudera.com/cdh/3/hbase-0.90.6-cdh3u5.tar.gz

Put the tarballs in /hadoop/cdh3.
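For example, they can be fetched straight into that directory with wget:

cd /hadoop/cdh3
wget http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u5.tar.gz
wget http://archive.cloudera.com/cdh/3/zookeeper-3.3.5-cdh3u5.tar.gz
wget http://archive.cloudera.com/cdh/3/hive-0.7.1-cdh3u5.tar.gz
wget http://archive.cloudera.com/cdh/3/hbase-0.90.6-cdh3u5.tar.gz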

The planned layout:

Directory             Owner   Mode  Purpose
/hadoop/cdh3          hadoop  755   Runtime environment for Hadoop and its components
/hadoop/data          hadoop  755   See below
/hadoop/data/hdfs     hadoop  700   Where the datanode stores data blocks; configured later via dfs.data.dir in hdfs-site.xml
/hadoop/data/storage  hadoop  777   Where everything uploaded to Hadoop ends up, so make sure it is large enough; configured later via hadoop.tmp.dir

Username  Home          Purpose
hadoop    /home/hadoop  [1] Used to start/stop Hadoop and for other maintenance
                        [2] Owner of the 700-mode /hadoop/data/hdfs directory (a different user could also be chosen for this)


Installation Steps

[1] Download the JDK

jdk1.6.0_43 was chosen at the time:

http://www.oracle.com/technetwork/java/javase/downloads/jdk6downloads-1902814.html
Linux x64, 68.7 MB: jdk-6u43-linux-x64.bin

 

Put it under /usr/local/share/ and run ./jdk-6u43-linux-x64.bin

Then set the JAVA_HOME and PATH environment variables; note that PATH must be appended to, not overwritten.
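For example (a sketch; whether this goes in /etc/profile or the user's ~/.bashrc is a matter of preference):

export JAVA_HOME=/usr/local/share/jdk1.6.0_43
export PATH=$JAVA_HOME/bin:$PATH    # prepend to the existing PATH, never PATH=$JAVA_HOME/bin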

root@hadoop-master:~# which java

/usr/local/share/jdk1.6.0_43/bin/java

root@hadoop-master:~# echo $JAVA_HOME

/usr/local/share/jdk1.6.0_43

Install the JDK on both master and slave; to make copying configuration files easy, be sure to use exactly the same directory on both machines.

 

 

[2] Create the hadoop operating user

root@hadoop-master:/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin# useradd hadoop -m

root@hadoop-master:/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin# su - hadoop

$ bash

hadoop@hadoop-master:~$

hadoop@hadoop-master:~$ pwd

/home/hadoop

hadoop@hadoop-master:~$ ll

total 28

drwxr-xr-x 3 hadoop hadoop 4096 2013-03-07 05:03 ./

drwxr-xr-x 4 root   root   4096 2013-03-07 05:02 ../

-rw-r--r-- 1 hadoop hadoop  220 2011-05-18 03:00 .bash_logout

-rw-r--r-- 1 hadoop hadoop 3353 2011-05-18 03:00 .bashrc

-rw-r--r-- 1 hadoop hadoop  179 2011-06-22 15:51 examples.desktop

-rw-r--r-- 1 hadoop hadoop  675 2011-05-18 03:00 .profile

 

Set up SSH key trust:

hadoop@hadoop-master:~$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Created directory '/home/hadoop/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

17:cc:2b:9c:81:5b:48:53:ee:d6:35:bc:1b:0f:9a:14 hadoop@hadoop-master

The key's randomart image is:

+--[ RSA 2048]----+

|      o..        |

|     . = o .     |

|      o + E +    |

|       = + = o   |

|      . S = +    |

|       . + o =   |

|          o . .  |

|                 |

|                 |

+-----------------+

hadoop@hadoop-master:~$

hadoop@hadoop-master:~$ cd .ssh

hadoop@hadoop-master:~/.ssh$ ll

total 16

drwxr-xr-x 2 hadoop hadoop 4096 2013-03-07 05:04 ./

drwxr-xr-x 3 hadoop hadoop 4096 2013-03-07 05:04 ../

-rw------- 1 hadoop hadoop 1675 2013-03-07 05:04 id_rsa

-rw-r--r-- 1 hadoop hadoop  402 2013-03-07 05:04 id_rsa.pub

hadoop@hadoop-master:~/.ssh$ cat id_rsa.pub  >> authorized_keys


On hadoop-slave, create a hadoop user with the same username as on the master.
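The command is the same as on the master, run as root on the slave:

useradd hadoop -m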

 

Then append the master's id_rsa.pub to /home/hadoop/.ssh/authorized_keys on the slave.
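One way to do that in a single step from the master, assuming password login to the slave still works at this point:

cat ~/.ssh/id_rsa.pub | ssh hadoop@hadoop-slave 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'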

 

At this point the master should be able to ssh to the slave without a password.
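A quick way to verify, run on the master as the hadoop user:

ssh hadoop-slave hostname    # should print the slave's hostname without asking for a password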

 

[3] Install hadoop-0.20.2-cdh3u5

Unpack:

cd /hadoop/cdh3

tar zxvf hadoop-0.20.2-cdh3u5.tar.gz

Edit the configuration files.

cdh3/hadoop-0.20.2-cdh3u5/conf/core-site.xml

 

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

<!--- global properties -->

<property>

  <name>hadoop.tmp.dir</name>

  <value>/hadoop/data/storage</value>

  <description>A directory for other temporary directories.</description>

</property>

<!-- file system properties -->

<property>

  <name>fs.default.name</name>

  <value>hdfs://hadoop-master:8020</value>

</property>

</configuration>

cdh3/hadoop-0.20.2-cdh3u5/conf/hadoop-env.sh

 

Change
# export JAVA_HOME=/usr/lib/j2sdk1.6-sun
to
export JAVA_HOME=/usr/local/share/jdk1.6.0_43

cdh3/hadoop-0.20.2-cdh3u5/conf/hdfs-site.xml

 

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

<property>

  <name>dfs.data.dir</name>

  <value>/hadoop/data/hdfs</value>

</property>

<property>

  <name>dfs.replication</name>

  <value>2</value>

</property>

<property>

  <name>dfs.datanode.max.xcievers</name>

  <value>4096</value>

</property>

</configuration>

cdh3/hadoop-0.20.2-cdh3u5/conf/mapred-site.xml

 

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 

<!-- Put site-specific property overrides in this file. -->

 

<configuration>

<property>

  <name>mapred.job.tracker</name>

  <value>hdfs://hadoop-master:8021</value>

</property>

<property>

  <name>mapred.system.dir</name>

  <value>/mapred/system</value>

</property>

</configuration>

cdh3/hadoop-0.20.2-cdh3u5/conf/masters

 

hadoop-master

cdh3/hadoop-0.20.2-cdh3u5/conf/slaves

 

hadoop-slave

Note: if hadoop-master is added here as well, a datanode will also be started on the master machine. If you have plenty of machines planned, leave it out.
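The same unpacked and configured tree also needs to exist on the slave. Since the directory layout is identical on both machines, it can simply be copied over, for example (a sketch, assuming /hadoop/cdh3 already exists on the slave and is writable by the hadoop user):

rsync -a /hadoop/cdh3/hadoop-0.20.2-cdh3u5 hadoop@hadoop-slave:/hadoop/cdh3/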

 

 

Create the data directories (with sudo):

sudo mkdir -p /hadoop/data/storage

sudo mkdir -p /hadoop/data/hdfs

sudo chmod 700 /hadoop/data/hdfs

sudo chown -R hadoop:hadoop /hadoop/data/hdfs

sudo chmod 777 /hadoop/data/storage

sudo chmod o+t /hadoop/data/storage
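The o+t adds the sticky bit (as on /tmp), so that within the world-writable storage directory users can only remove files they own. A quick check that the modes took effect:

ls -ld /hadoop/data/hdfs /hadoop/data/storage
# expect drwx------ for /hadoop/data/hdfs and drwxrwxrwt for /hadoop/data/storage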

 

As the hadoop user, format the namenode:

hadoop@hadoop-master:~$ hadoop namenode -format
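This assumes the hadoop binary is on the hadoop user's PATH; a minimal sketch of that, appended to the hadoop user's ~/.bashrc:

export PATH=/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin:$PATH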

 

 

Start Hadoop:

hadoop@hadoop-master:~$ cd /hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin

hadoop@hadoop-master:/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin$ ./start-all.sh

starting namenode, logging to /mnt/hgfs/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin/../logs/hadoop-hadoop-namenode-hadoop-master.out

hadoop-slave: starting datanode, logging to /mnt/hgfs/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin/../logs/hadoop-hadoop-datanode-hadoop-slave.out

hadoop-master: starting secondarynamenode, logging to /mnt/hgfs/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop-master.out

starting jobtracker, logging to /mnt/hgfs/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin/../logs/hadoop-hadoop-jobtracker-hadoop-master.out

hadoop-slave: starting tasktracker, logging to /mnt/hgfs/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin/../logs/hadoop-hadoop-tasktracker-hadoop-slave.out

 

Check the startup result:

hadoop@hadoop-master:/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin$ jps

5759 SecondaryNameNode

5462 NameNode

5832 JobTracker

5890 Jps

hadoop@hadoop-master:/hadoop/cdh3/hadoop-0.20.2-cdh3u5/bin$
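No DataNode or TaskTracker shows up on the master because hadoop-master was not listed in the slaves file; those daemons run on hadoop-slave. One way to confirm that the slave's datanode actually registered with the namenode:

hadoop dfsadmin -report    # the report should list hadoop-slave as an available datanode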
