Single-node Hadoop 0.20.2 setup on Ubuntu 8.04

Original: http://blog.csdn.net/laysom/archive/2010/10/04/5920903.aspx
I. Required software
1. JDK
2. SSH
3. Hadoop

All of the steps below are performed as the root user.
II. JDK installation and setup
1. Installation
$apt-get install sun-java6-jdk sun-java6-plugin
$update-java-alternatives -s java-6-sun
2. Setup
$gedit /etc/profile
Add the environment variables:
# set java environment
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export JRE_HOME=/usr/lib/jvm/java-6-sun/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
Then make the settings take effect:
$chmod +x /etc/profile    #add execute permission
$source /etc/profile
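To confirm the JDK is installed and the variables are in effect, a quick check (the exact version string depends on the Sun JDK build you received):
$java -version
$echo $JAVA_HOME
#java -version should report a 1.6.0 Sun HotSpot build, and JAVA_HOME should print /usr/lib/jvm/java-6-sun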

3. If the JDK is not in your repositories, add the Canonical partner repository (note: lucid is the 10.04 codename; on Ubuntu 8.04 substitute hardy):
$add-apt-repository "deb http://archive.canonical.com/ lucid partner"
$apt-get update

Then repeat step 1.

III. SSH installation and configuration
$apt-get install ssh
Set up passwordless login:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
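You can verify the passwordless setup before moving on (the very first connection asks you to confirm the host key; after that, no password prompt should appear):
$ ssh localhost
$ exit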


IV. Hadoop installation, configuration, and use
1. Download from a mirror, e.g.:
http://labs.renren.com/apache-mirror//hadoop/core/

2. Unpack (into the current user's home directory)
$cd ~
$tar zxvf hadoop-0.20.2.tar.gz
$cd hadoop-0.20.2


3. Set the environment variable
$gedit conf/hadoop-env.sh
Add:
# set java environment
export JAVA_HOME=/usr/lib/jvm/java-6-sun
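A quick way to confirm Hadoop picks the variable up is to ask for its version; if JAVA_HOME is not visible to the scripts, bin/hadoop exits with a JAVA_HOME error instead:
$ bin/hadoop version
#should print Hadoop 0.20.2 plus build information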
4. Edit the configuration files
$gedit conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>


$gedit conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

$gedit conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
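A stray character in any of these files will make the daemons fail at startup, so it is worth checking that all three are well-formed XML first (a sketch; assumes the libxml2-utils package, which provides xmllint, is available in your sources):
$ apt-get install libxml2-utils
$ xmllint --noout conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml
#no output means all three files parse cleanly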


5. Run the wordcount example
(1) Format HDFS
$bin/hadoop namenode -format
The format output looks like the following (this capture is from an earlier 0.19.0 run; 0.20.2 prints the same messages with its own version string). Note that the re-format prompt is case-sensitive: a lowercase y aborts the format, as happened here; answer an uppercase Y to actually re-format:
10/08/01 19:04:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.19.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.19 -r 713890; compiled by 'ndaley' on Fri Nov 14 03:12:29 UTC 2008
************************************************************/
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-root/dfs/name
10/08/01 19:04:05 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

(2) Start the Hadoop daemons
$bin/start-all.sh
starting namenode, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-datanode-localhost.out
localhost: starting secondarynamenode, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-secondarynamenode-localhost.out
starting jobtracker, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-jobtracker-localhost.out
localhost: starting tasktracker, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-tasktracker-localhost.out
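To verify that all five daemons actually came up, jps (shipped with the Sun JDK) lists the running Java processes:
$ jps
#expect NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, plus Jps itself
If one is missing, check its log under logs/ before continuing.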

(3) Prepare the data for the wordcount job
$ cd hadoop-0.20.2
$ mkdir test-txt
$ cd test-txt
$ echo "hello world, bye , world." >file1.txt
$ echo "hello hadoop, goodbye , hadoop" >file2.txt
$ cd ..
$ bin/hadoop dfs -put ./test-txt input

#Copies the local ./test-txt directory into your HDFS home directory under the name input.
#Run bin/hadoop dfs -help to learn the various HDFS commands.
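To confirm the upload landed where the job will look for it (relative HDFS paths resolve under /user/root when running as root):
$ bin/hadoop dfs -ls input
#should list input/file1.txt and input/file2.txt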
$ bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output


#View the results:
#Copy the files from HDFS back to the local filesystem, then inspect them:
$ bin/hadoop dfs -get output output
$ cat output/*
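Alternatively, read the result directly off HDFS without copying it down first:
$ bin/hadoop dfs -cat output/*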


Troubleshooting (mainly log-file analysis)

While working through the steps above you may run into exceptions; the common cases break down as follows:

1. Call to localhost/127.0.0.1:9000 failed on local exception

(1) Description

This may appear when you run:

# bin/hadoop jar hadoop-0.19.0-examples.jar wordcount input output

The error output looks like this:
10/08/01 19:50:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
10/08/01 19:50:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
10/08/01 19:50:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
10/08/01 19:50:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
10/08/01 19:50:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
10/08/01 19:51:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
10/08/01 19:51:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
10/08/01 19:51:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
10/08/01 19:51:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
10/08/01 19:51:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.lang.RuntimeException: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: Connection refused
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:323)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:295)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:268)
at org.apache.hadoop.examples.WordCount.run(WordCount.java:146)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
Caused by: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: Connection refused
at org.apache.hadoop.ipc.Client.call(Client.java:699)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:104)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:74)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:319)
... 21 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:299)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:685)
... 33 more

(2) Analysis

The key line in the output above is:

Retrying connect to server: localhost/127.0.0.1:9000.

The client retried the connection ten times without ever getting through, which means the communication path to the server is down. We already configured the namenode address in core-site.xml (hadoop-site.xml in pre-0.20 releases):

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

So we can be sure the client cannot reach the server; most likely the namenode process was never started at all, let alone able to accept jobs.

I reproduced this exception as follows: I formatted HDFS but skipped bin/start-all.sh and launched the wordcount job directly, which produced exactly the error above.

The fix, then, is to run bin/start-all.sh before starting the wordcount job.
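When in doubt, a short diagnostic sequence narrows the cause quickly: jps shows whether a NameNode process is running at all, netstat shows whether anything is listening on the configured port, and the namenode log (names follow the hadoop-<user>-<daemon>-<host>.log pattern) usually states the real cause:

# jps
# netstat -tlnp | grep 9000
# tail -n 50 logs/hadoop-root-namenode-*.log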

2. Input path does not exist

(1) Description

Suppose you create an input directory under the current hadoop directory, cp some files into it, and then run:

# bin/hadoop namenode -format

# bin/start-all.sh

At this point you assume input exists and the wordcount job should run:

# bin/hadoop jar hadoop-0.19.0-examples.jar wordcount input output

Instead it throws a pile of exceptions:

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:179)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:190)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:782)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1127)
at org.apache.hadoop.examples.WordCount.run(WordCount.java:149)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

I reproduced this exception as follows:

# bin/hadoop fs -rmr input
Deleted hdfs://localhost:9000/user/root/input

# bin/hadoop fs -rmr output
Deleted hdfs://localhost:9000/user/root/output

because I had already run the job through successfully once before.

(2) Analysis

This one needs little explanation: the local input directory was never uploaded to HDFS, hence org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/input

As I recall, with hadoop-0.16.4 the job would run as long as a local input directory existed, with no explicit upload; later releases no longer allow this.

The fix is simply to upload it:

# bin/hadoop fs -put input/ input
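Putting it together, a typical recovery sequence looks like this (a sketch; match the examples jar name to your release). Clear any stale output first, since the job aborts if the output directory already exists:

# bin/hadoop fs -rmr output
# bin/hadoop fs -put input/ input
# bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output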