Hadoop Single-Node (Pseudo-Distributed) Installation on Red Hat Enterprise Linux (RHEL) 5

Reference blog:
http://freewxy.iteye.com/blog/1027569
Downloading, extracting, and adjusting file permissions are skipped here for now.
Passwordless SSH login
[code="shell"]
# ssh-keygen -t rsa -P ''
root@gen's password:
Permission denied, please try again.
root@gen's password:
Permission denied, please try again.
root@gen's password:
Permission denied (publickey,gssapi-with-mic,password).
# vi /etc/ssh/sshd_config
// This part still has a problem (the password prompts above show the public key was not yet authorized); I'll write it up properly when I have time.
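The failed logins above happen because no public key has been authorized yet. A minimal sketch of the standard key setup (the scratch directory is only so the demo does not clobber an existing ~/.ssh; on the real box, use $HOME/.ssh for both files):

```shell
# Generate an RSA key pair with an empty passphrase; note the leading
# dash on -P, which is easy to drop by accident.
DEMO=$(mktemp -d)                  # stand-in for ~/.ssh in this sketch
ssh-keygen -t rsa -P '' -f "$DEMO/id_rsa" -q
# Authorize the public key for logins to this same host.
cat "$DEMO/id_rsa.pub" >> "$DEMO/authorized_keys"
chmod 600 "$DEMO/authorized_keys"  # sshd rejects loosely-permissioned key files
ls "$DEMO"
```

Once id_rsa.pub has been appended to ~/.ssh/authorized_keys for the user that will run hadoop, `ssh localhost` should log in without a password prompt.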

//Change into the hadoop directory
# cd /usr/local/hadoop
# ll
drwxrwxr-x 2 hehaibo hehaibo 4096 2010-08-17 bin
drwxrwxr-x 5 hehaibo hehaibo 4096 2010-08-17 c++
drwxr-xr-x 8 hehaibo hehaibo 4096 2010-08-17 common
drwxrwxr-x 2 hehaibo hehaibo 4096 2010-08-17 conf
-rw-rw-r-- 1 hehaibo hehaibo 1289953 2010-08-17 hadoop-common-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 622276 2010-08-17 hadoop-common-test-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 934881 2010-08-17 hadoop-hdfs-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 613332 2010-08-17 hadoop-hdfs-0.21.0-sources.jar
-rw-rw-r-- 1 hehaibo hehaibo 6956 2010-08-17 hadoop-hdfs-ant-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 688026 2010-08-17 hadoop-hdfs-test-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 419671 2010-08-17 hadoop-hdfs-test-0.21.0-sources.jar
-rw-rw-r-- 1 hehaibo hehaibo 1747897 2010-08-17 hadoop-mapred-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 1182309 2010-08-17 hadoop-mapred-0.21.0-sources.jar
-rw-rw-r-- 1 hehaibo hehaibo 252064 2010-08-17 hadoop-mapred-examples-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 1492025 2010-08-17 hadoop-mapred-test-0.21.0.jar
-rw-rw-r-- 1 hehaibo hehaibo 298837 2010-08-17 hadoop-mapred-tools-0.21.0.jar
drwxr-xr-x 8 hehaibo hehaibo 4096 2010-08-17 hdfs
drwxrwxr-x 4 hehaibo hehaibo 4096 2010-08-17 lib
-rw-rw-r-- 1 hehaibo hehaibo 13366 2010-08-17 LICENSE.txt
drwxr-xr-x 9 hehaibo hehaibo 4096 2010-08-17 mapred
-rw-rw-r-- 1 hehaibo hehaibo 101 2010-08-17 NOTICE.txt
-rw-rw-r-- 1 hehaibo hehaibo 1366 2010-08-17 README.txt
drwxrwxr-x 8 hehaibo hehaibo 4096 2010-08-17 webapps
# vi conf/hadoop-env.sh
Find the JAVA_HOME line, uncomment it, and set the Java environment variable:
export JAVA_HOME=/usr/local/jdk1.6.0_19
# vi conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/root/tmp</value>
  </property>
</configuration>
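As an aside, dfs.replication is conventionally set in conf/hdfs-site.xml rather than core-site.xml (Hadoop merges the configuration files, so the layout above also works). The equivalent hdfs-site.xml fragment would be:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```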

# vi conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
# cd bin/
//List the contents of the bin directory
# ll
total 160
-rwxr-xr-x 1 hehaibo hehaibo 4131 2010-08-17 hadoop
-rwxr-xr-x 1 hehaibo hehaibo 8658 2010-08-17 hadoop-config.sh
-rwxr-xr-x 1 hehaibo hehaibo 3841 2010-08-17 hadoop-daemon.sh
-rwxr-xr-x 1 hehaibo hehaibo 1242 2010-08-17 hadoop-daemons.sh
-rwxr-xr-x 1 hehaibo hehaibo 4130 2010-08-17 hdfs
-rwxr-xr-x 1 hehaibo hehaibo 1201 2010-08-17 hdfs-config.sh
-rwxr-xr-x 1 hehaibo hehaibo 3387 2010-08-17 mapred
-rwxr-xr-x 1 hehaibo hehaibo 1207 2010-08-17 mapred-config.sh
-rwxr-xr-x 1 hehaibo hehaibo 2720 2010-08-17 rcc
-rwxr-xr-x 1 hehaibo hehaibo 2058 2010-08-17 slaves.sh
-rwxr-xr-x 1 hehaibo hehaibo 1367 2010-08-17 start-all.sh
-rwxr-xr-x 1 hehaibo hehaibo 1018 2010-08-17 start-balancer.sh
-rwxr-xr-x 1 hehaibo hehaibo 1778 2010-08-17 start-dfs.sh
-rwxr-xr-x 1 hehaibo hehaibo 1255 2010-08-17 start-mapred.sh
-rwxr-xr-x 1 hehaibo hehaibo 1359 2010-08-17 stop-all.sh
-rwxr-xr-x 1 hehaibo hehaibo 1069 2010-08-17 stop-balancer.sh
-rwxr-xr-x 1 hehaibo hehaibo 1277 2010-08-17 stop-dfs.sh
-rwxr-xr-x 1 hehaibo hehaibo 1163 2010-08-17 stop-mapred.sh

//Format the namenode. Formatting initializes a fresh, empty HDFS namespace under hadoop.tmp.dir and must be done once before the first start.
# sh hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

11/08/13 10:56:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = oplinux.hehaibo.com/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.21.0
STARTUP_MSG: classpath = /usr/local/hadoop/bin/../conf:/usr/local/jdk1.6.0_19/lib/tools.jar:/usr/local/hadoop/bin/..:/usr/local/hadoop/bin/../hadoop-common-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-common-test-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-hdfs-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-hdfs-0.21.0-sources.jar:/usr/local/hadoop/bin/../hadoop-hdfs-ant-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-hdfs-test-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-hdfs-test-0.21.0-sources.jar:/usr/local/hadoop/bin/../hadoop-mapred-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-mapred-0.21.0-sources.jar:/usr/local/hadoop/bin/../hadoop-mapred-examples-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-mapred-test-0.21.0.jar:/usr/local/hadoop/bin/../hadoop-mapred-tools-0.21.0.jar:/usr/local/hadoop/bin/../lib/ant-1.6.5.jar:/usr/local/hadoop/bin/../lib/asm-3.2.jar:/usr/local/hadoop/bin/../lib/aspectjrt-1.6.5.jar:/usr/local/hadoop/bin/../lib/aspectjtools-1.6.5.jar:/usr/local/hadoop/bin/../lib/avro-1.3.2.jar:/usr/local/hadoop/bin/../lib/commons-cli-1.2.jar:/usr/local/hadoop/bin/../lib/commons-codec-1.4.jar:/usr/local/hadoop/bin/../lib/commons-el-1.0.jar:/usr/local/hadoop/bin/../lib/commons-httpclient-3.1.jar:/usr/local/hadoop/bin/../lib/commons-lang-2.5.jar:/usr/local/hadoop/bin/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop/bin/../lib/commons-logging-api-1.1.jar:/usr/local/hadoop/bin/../lib/commons-net-1.4.1.jar:/usr/local/hadoop/bin/../lib/core-3.1.1.jar:/usr/local/hadoop/bin/../lib/ftplet-api-1.0.0.jar:/usr/local/hadoop/bin/../lib/ftpserver-core-1.0.0.jar:/usr/local/hadoop/bin/../lib/ftpserver-deprecated-1.0.0-M2.jar:/usr/local/hadoop/bin/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop/bin/../lib/jackson-core-asl-1.4.2.jar:/usr/local/hadoop/bin/../lib/jackson-mapper-asl-1.4.2.jar:/usr/local/hadoop/bin/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop/bin/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop/bin/../lib/jdiff-1.0.9.jar:/usr/local/hadoop/bin/../lib/jets3t-0.7.1.jar:/usr/loc
al/hadoop/bin/../lib/jetty-6.1.14.jar:/usr/local/hadoop/bin/../lib/jetty-util-6.1.14.jar:/usr/local/hadoop/bin/../lib/jsp-2.1-6.1.14.jar:/usr/local/hadoop/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/local/hadoop/bin/../lib/junit-4.8.1.jar:/usr/local/hadoop/bin/../lib/kfs-0.3.jar:/usr/local/hadoop/bin/../lib/log4j-1.2.15.jar:/usr/local/hadoop/bin/../lib/mina-core-2.0.0-M5.jar:/usr/local/hadoop/bin/../lib/mockito-all-1.8.2.jar:/usr/local/hadoop/bin/../lib/oro-2.0.8.jar:/usr/local/hadoop/bin/../lib/paranamer-2.2.jar:/usr/local/hadoop/bin/../lib/paranamer-ant-2.2.jar:/usr/local/hadoop/bin/../lib/paranamer-generator-2.2.jar:/usr/local/hadoop/bin/../lib/qdox-1.10.1.jar:/usr/local/hadoop/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/local/hadoop/bin/../lib/slf4j-api-1.5.11.jar:/usr/local/hadoop/bin/../lib/slf4j-log4j12-1.5.11.jar:/usr/local/hadoop/bin/../lib/xmlenc-0.52.jar:/usr/local/hadoop/bin/../lib/jsp-2.1/*.jar:/usr/local/hadoop/hdfs/bin/../conf:/usr/local/hadoop/hdfs/bin/../hadoop-hdfs-*.jar:/usr/local/hadoop/hdfs/bin/../lib/*.jar:/usr/local/hadoop/bin/../mapred/conf:/usr/local/hadoop/bin/../mapred/hadoop-mapred-*.jar:/usr/local/hadoop/bin/../mapred/lib/*.jar:/usr/local/hadoop/hdfs/bin/../hadoop-hdfs-*.jar:/usr/local/hadoop/hdfs/bin/../lib/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r 985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
************************************************************/
11/08/13 10:56:19 INFO namenode.FSNamesystem: defaultReplication = 3
11/08/13 10:56:19 INFO namenode.FSNamesystem: maxReplication = 512
11/08/13 10:56:19 INFO namenode.FSNamesystem: minReplication = 1
11/08/13 10:56:19 INFO namenode.FSNamesystem: maxReplicationStreams = 2
11/08/13 10:56:19 INFO namenode.FSNamesystem: shouldCheckForEnoughRacks = false
11/08/13 10:56:19 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 10:56:20 INFO namenode.FSNamesystem: fsOwner=root
11/08/13 10:56:20 INFO namenode.FSNamesystem: supergroup=supergroup
11/08/13 10:56:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/08/13 10:56:20 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/08/13 10:56:21 INFO common.Storage: Image file of size 110 saved in 0 seconds.
11/08/13 10:56:21 INFO common.Storage: Storage directory /home/root/tmp/dfs/name has been successfully formatted.
11/08/13 10:56:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at oplinux.hehaibo.com/127.0.0.1
************************************************************/

//Stop hadoop (nothing was running yet, hence the "no ... to stop" lines below)
# sh stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop
no jobtracker to stop
localhost: no tasktracker to stop
//Start hadoop
# sh start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-namenode-oplinux.hehaibo.com.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-datanode-oplinux.hehaibo.com.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-secondarynamenode-oplinux.hehaibo.com.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-root-jobtracker-oplinux.hehaibo.com.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-root-tasktracker-oplinux.hehaibo.com.out
//Verify the startup succeeded
# jps
5333 TaskTracker
6035 Jps
4801 NameNode
5190 JobTracker
5078 SecondaryNameNode
4940 DataNode
# jps
5333 TaskTracker
6068 Jps
4801 NameNode
5190 JobTracker
5078 SecondaryNameNode
4940 DataNode

//Create a test file, e.g. /home/root/hadooptest.txt
//My DFS home directory is /user/root (it does not exist on the Linux filesystem; it is a directory inside HDFS)
/**
 * DFS file operation commands:
 * dfs -mkdir firsttest  // actually creates /user/root/firsttest under the HDFS home directory
 * dfs -ls               // list files and directories
 * dfs -rmr              // remove a directory recursively (dfs -rm removes files)
 */
//Move back up to the hadoop directory
# cd ../
//Run an example:
//first create /home/root/hadooptest.txt and fill it with a few English words,
//then upload the file into the HDFS directory firsttest.
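The test file can be created like this (written to /tmp here so the sketch runs as any user; the walkthrough uses /home/root/hadooptest.txt, and the exact words are arbitrary):

```shell
# Write a few English sentences; wordcount will count each distinct word.
cat > /tmp/hadooptest.txt <<'EOF'
hello hadoop this is just an example
today is sunday and I am happy to play with linux
EOF
wc -w /tmp/hadooptest.txt
```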
# sh hadoop dfs -copyFromLocal /home/root/hadooptest.txt firsttest
sh: hadoop: No such file or directory
//Note: this error means the hadoop script itself was not found -- after cd ../ it must be invoked as bin/hadoop, not sh hadoop.
//The target directory firsttest also does not exist yet; create it with the following command:
# bin/hadoop dfs -mkdir firsttest
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

11/08/13 12:11:52 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 12:11:52 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id

//List the newly created directory
# bin/hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

11/08/13 12:12:46 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 12:12:46 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
drwxr-xr-x - root supergroup 0 2011-08-13 12:11 /user/root/firsttest
#

//Now run wordcount and save its output to the result directory (note the mistyped input path below)
# bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount firsttestttttt result
11/08/13 11:50:49 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 11:50:50 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/08/13 11:50:50 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/08/13 11:50:50 INFO mapreduce.JobSubmitter: Cleaning up the staging area hdfs://localhost:9000/home/root/tmp/mapred/staging/root/.staging/job_201108131105_0006
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/root/firsttestttttt
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:245)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:271)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:401)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:418)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:338)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:960)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:976)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:192)


//Error: Input path does not exist: hdfs://localhost:9000/user/root/firsttestttttt
//The input directory name was mistyped as firsttestttttt, so it was never created; rerunning with the real directory name, firsttest, fixes it.
//The correct run:
# bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount firsttest result
11/08/13 13:56:11 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 13:56:12 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/08/13 13:56:12 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/08/13 13:56:13 INFO input.FileInputFormat: Total input paths to process : 1
11/08/13 13:56:14 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
11/08/13 13:56:14 INFO mapreduce.JobSubmitter: number of splits:1
11/08/13 13:56:14 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
11/08/13 13:56:15 INFO mapreduce.Job: Running job: job_201108131105_0011
11/08/13 13:56:16 INFO mapreduce.Job: map 0% reduce 0%
11/08/13 13:56:43 INFO mapreduce.Job: map 100% reduce 0%
11/08/13 13:57:04 INFO mapreduce.Job: map 100% reduce 100%
11/08/13 13:57:06 INFO mapreduce.Job: Job complete: job_201108131105_0011
11/08/13 13:57:06 INFO mapreduce.Job: Counters: 33
FileInputFormatCounters
BYTES_READ=141
FileSystemCounters
FILE_BYTES_READ=284
FILE_BYTES_WRITTEN=600
HDFS_BYTES_READ=262
HDFS_BYTES_WRITTEN=178
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
Job Counters
Data-local map tasks=1
Total time spent by all maps waiting after reserving slots (ms)=0
Total time spent by all reduces waiting after reserving slots (ms)=0
SLOTS_MILLIS_MAPS=22524
SLOTS_MILLIS_REDUCES=18291
Launched map tasks=1
Launched reduce tasks=1
Map-Reduce Framework
Combine input records=28
Combine output records=25
Failed Shuffles=0
GC time elapsed (ms)=404
Map input records=4
Map output bytes=253
Map output records=28
Merged Map outputs=1
Reduce input groups=25
Reduce input records=25
Reduce output records=25
Reduce shuffle bytes=284
Shuffled Maps =1
Spilled Records=50
SPLIT_RAW_BYTES=121


/**
 * Rerunning the job fails with "Output directory result already exists" because the previous run created it.
 * Delete the old output first with: bin/hadoop dfs -rmr result
# bin/hadoop jar hadoop-mapred-examples-0.21.0.jar wordcount firsttest result
11/08/13 12:17:24 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 12:17:25 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/08/13 12:17:25 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/08/13 12:17:25 INFO mapreduce.JobSubmitter: Cleaning up the staging area hdfs://localhost:9000/home/root/tmp/mapred/staging/root/.staging/job_201108131105_0009
org.apache.hadoop.fs.FileAlreadyExistsException: Output directory result already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:140)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:373)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:334)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:960)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:976)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
*/
//List the HDFS home directory after the job finishes
# bin/hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

11/08/13 12:12:46 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 12:12:46 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
//the input directory
drwxr-xr-x - root supergroup 0 2011-08-13 12:11 /user/root/firsttest
//the job output directory
drwxr-xr-x - root supergroup 0 2011-08-13 11:52 /user/root/result

#
//View the results: list the contents of the result directory
# bin/hadoop dfs -ls result
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

11/08/13 14:06:23 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 14:06:23 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
-rw-r--r-- 3 root supergroup 0 2011-08-13 13:57 /user/root/result/_SUCCESS
-rw-r--r-- 3 root supergroup 178 2011-08-13 13:56 /user/root/result/part-r-00000

//View the file contents
# bin/hadoop dfs -cat result/part-r-00000
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

11/08/13 14:08:34 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/08/13 14:08:34 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
. 1
I 1
a 1
am 1
case 1
do 1
everything 1
example 1
hadoop 2
happy 1
hehaibo 1
hello 1
if 1
is 3
jsut 1
linux 1
play 1
possible 1
sunday 1
this 1
to 1
today 1
with 1
yes 1
you 1
#
[/code]
That's all for now.