Hadoop 1.2.1 on CentOS 6.3 64-bit

1. Environment

OS: CentOS release 6.3, 64-bit

Java: Java(TM) SE Runtime Environment (build 1.7.0_40-b43)

Hadoop: 1.2.1

The cluster consists of one master and three slaves, connected over a LAN and able to ping one another.

Node IP addresses:

192.168.1.102 Master.Hadoop
192.168.1.100 Slave1.Hadoop
192.168.1.101 Slave2.Hadoop
192.168.1.103 Slave3.Hadoop

Changing the hostname and IP address is not covered in detail here; a minimal sketch follows.
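As a quick reference for CentOS 6 (a sketch using this guide's master as the example; the interface name eth0 is an assumption):

hostname Master.Hadoop
sed -i 's/^HOSTNAME=.*/HOSTNAME=Master.Hadoop/' /etc/sysconfig/network

Static IP settings live in /etc/sysconfig/network-scripts/ifcfg-eth0.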


2. Edit /etc/hosts

Add the following entries to /etc/hosts on every node:

192.168.1.102 Master.Hadoop
192.168.1.100 Slave1.Hadoop
192.168.1.101 Slave2.Hadoop
192.168.1.103 Slave3.Hadoop
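
Name resolution can then be checked from any node, for example:

ping -c 1 Slave1.Hadoop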


3. Download the Required Software

1) JDK

    Download: http://www.oracle.com/technetwork/java/javase/index.html

    JDK version: jdk-7u40-linux-x64.tar.gz

2) Hadoop

    Download: http://hadoop.apache.org/common/releases.html

    Hadoop version: hadoop-1.2.1.tar.gz

After downloading, upload both packages to /soft on Master.Hadoop.


4. Passwordless SSH Setup

1) Create the user (on every host)

useradd hadoop
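
The account also needs a password on each host (used for the initial key exchange below, before passwordless login works):

passwd hadoop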

2) On every host, run the following as the hadoop user:

mkdir ~/.ssh

chmod 700 ~/.ssh

ssh-keygen -t rsa
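
ssh-keygen prompts for a key file and a passphrase; accept the default file and leave the passphrase empty. The non-interactive equivalent is:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa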

3) On the master node, run the following (as the same user that generated the keys in step 2) to gather every node's public key into one authorized_keys file:
ssh Master.Hadoop cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh Slave1.Hadoop cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh Slave2.Hadoop cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh Slave3.Hadoop cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

4) Copy the merged file back to every host:

scp ~/.ssh/authorized_keys Master.Hadoop:~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys Slave1.Hadoop:~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys Slave2.Hadoop:~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys Slave3.Hadoop:~/.ssh/authorized_keys

5) Test

ssh Master.Hadoop date

ssh Slave1.Hadoop date

ssh Slave2.Hadoop date

ssh Slave3.Hadoop date

If each command runs without prompting for a password, the trust setup succeeded.



5. Install Java

Install Java on every host.

1) As root, run:

mkdir /usr/java

cd /soft/

gunzip jdk-7u40-linux-x64.tar.gz

tar -xvf jdk-7u40-linux-x64.tar

mv jdk1.7.0_40 /usr/java/


2) Configure the environment variables

Append the following to /etc/profile (note the quoted EOF: it stops the shell from expanding the $ variables while writing the block):

cat >> /etc/profile <<'EOF'
# set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_40
export JRE_HOME=/usr/java/jdk1.7.0_40/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF

3) Load the variables

source /etc/profile


4) Verify the installation

java -version

java version "1.7.0_40"
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)


6. Install Hadoop on the Master

With the downloaded hadoop-1.2.1.tar.gz in /soft:

cd /soft

gunzip hadoop-1.2.1.tar.gz

tar -xvf hadoop-1.2.1.tar

Move the extracted directory to /usr, rename it hadoop, create a tmp directory, and give ownership to the hadoop user:

mv hadoop-1.2.1 /usr/hadoop

mkdir /usr/hadoop/tmp

chown -R hadoop:hadoop /usr/hadoop


Add the Hadoop environment variables to /etc/profile (again with a quoted EOF):


cat >> /etc/profile <<'EOF'
# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF

source /etc/profile
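
A quick sanity check that the Hadoop CLI is now on the PATH:

hadoop version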


7. Configure Hadoop

1) Configure /usr/hadoop/conf/hadoop-env.sh

Append the following at the end of the file:

# set java environment
export JAVA_HOME=/usr/java/jdk1.7.0_40


2) Configure /usr/hadoop/conf/core-site.xml

Template:

<configuration>
  <property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/hadoop/tmp</value>
          <description>A base for other temporary directories.</description>
  </property>
  <!-- file system properties -->
  <property>
          <name>fs.default.name</name>
          <value>hdfs://192.168.1.102:9000</value>
  </property>
</configuration>


3) Configure /usr/hadoop/conf/hdfs-site.xml



Template:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
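
Note: dfs.replication=1 stores a single copy of each block; with three datanodes available you could raise it to 3 (the Hadoop default) for redundancy.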


4) Configure /usr/hadoop/conf/mapred-site.xml

Template (the JobTracker address is plain host:port, without an http:// prefix):

<configuration>
   <property>
        <name>mapred.job.tracker</name>
        <value>192.168.1.102:9001</value>
    </property>
</configuration>


5) Configure /usr/hadoop/conf/masters

Template:

192.168.1.102


6) Configure /usr/hadoop/conf/slaves

192.168.1.100

192.168.1.101

192.168.1.103


8. Install Hadoop on the Slaves

1) Copy /usr/hadoop from the master directly to /usr/ on each slave:

scp -r /usr/hadoop root@Slave1.Hadoop:/usr/

scp -r /usr/hadoop root@Slave2.Hadoop:/usr/

scp -r /usr/hadoop root@Slave3.Hadoop:/usr/


Then, on each slave, restore the ownership:

chown -R hadoop:hadoop /usr/hadoop
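
The copy and the ownership fix can also be done in one loop from the master (a sketch, assuming root can SSH to the slaves):

for h in Slave1.Hadoop Slave2.Hadoop Slave3.Hadoop; do
    scp -r /usr/hadoop root@$h:/usr/
    ssh root@$h chown -R hadoop:hadoop /usr/hadoop
done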


2) Update /etc/profile as well

Add the same Hadoop environment variables on every slave (quoted EOF again):


cat >> /etc/profile <<'EOF'
# set hadoop path
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF

source /etc/profile


9. Start and Verify Hadoop

1) Format the HDFS filesystem

On Master.Hadoop:

hadoop namenode -format
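
Format only once. Reformatting a NameNode that already holds data changes its namespaceID, and the existing datanodes will refuse to register until their data directories are cleared.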


2) Start Hadoop

[root@Master ~]# start-all.sh 
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-namenode-Master.Hadoop.out
192.168.1.103: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-Slave3.Hadoop.out
192.168.1.101: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-Slave2.Hadoop.out
192.168.1.100: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-Slave1.Hadoop.out
192.168.1.102: starting secondarynamenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-secondarynamenode-Master.Hadoop.out
starting jobtracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-jobtracker-Master.Hadoop.out
192.168.1.101: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-Slave2.Hadoop.out
192.168.1.103: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-Slave3.Hadoop.out
192.168.1.100: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-Slave1.Hadoop.out




10. Verify Hadoop

1) Verify with jps (on the master)

jps
9176 NameNode
9563 Jps
9323 SecondaryNameNode
9403 JobTracker
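
On each slave, jps should instead show the DataNode and TaskTracker processes.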


2) Verify with hadoop dfsadmin -report

hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.


Configured Capacity: 56238575616 (52.38 GB)
Present Capacity: 43917987840 (40.9 GB)
DFS Remaining: 43917864960 (40.9 GB)
DFS Used: 122880 (120 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0


-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)


Name: 192.168.1.101:50010
Decommission Status : Normal
Configured Capacity: 18746191872 (17.46 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 4097277952 (3.82 GB)
DFS Remaining: 14648872960(13.64 GB)
DFS Used%: 0%
DFS Remaining%: 78.14%
Last contact: Tue Sep 17 03:43:40 PDT 2013




Name: 192.168.1.100:50010
Decommission Status : Normal
Configured Capacity: 18746191872 (17.46 GB)
DFS Used: 40945 (39.99 KB)
Non DFS Used: 4109647887 (3.83 GB)
DFS Remaining: 14636503040(13.63 GB)
DFS Used%: 0%
DFS Remaining%: 78.08%
Last contact: Tue Sep 17 03:43:38 PDT 2013




Name: 192.168.1.103:50010
Decommission Status : Normal
Configured Capacity: 18746191872 (17.46 GB)
DFS Used: 40975 (40.01 KB)
Non DFS Used: 4113661937 (3.83 GB)
DFS Remaining: 14632488960(13.63 GB)
DFS Used%: 0%
DFS Remaining%: 78.06%
Last contact: Tue Sep 17 03:43:40 PDT 2013


11. Stop Hadoop

stop-all.sh 
Warning: $HADOOP_HOME is deprecated.
stopping jobtracker
192.168.1.103: stopping tasktracker
192.168.1.101: stopping tasktracker
192.168.1.100: stopping tasktracker
stopping namenode
192.168.1.101: no datanode to stop
192.168.1.103: stopping datanode
192.168.1.100: no datanode to stop
192.168.1.102: stopping secondarynamenode



12. Web UI Verification

1) Visit http://192.168.1.102:50030 (the JobTracker web UI).



2) Visit http://192.168.1.102:50070 (the NameNode/HDFS web UI).
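
If no browser is handy, reachability can also be checked from a shell on any node, for example:

curl -sI http://192.168.1.102:50070/ | head -1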




13. Say Hello from the Elephant's Back (WordCount)

mkdir input

cd input

echo "hello world">test1.txt

echo "hello  hadoop">test2.txt

cd ..

hadoop dfs -put input in


hadoop jar /usr/hadoop/hadoop-examples-1.2.1.jar wordcount in out


hadoop dfs -cat out/*
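
For the two input files created above, the counts in the output should be:

hadoop  1
hello   2
world   1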







