Hadoop 2.2.0 Pseudo-Distributed Installation

Part 1: System Preparation

1. Change the hostname

On Fedora or Red Hat, changing the machine name (hostname) requires editing two files: the HOSTNAME entry in /etc/sysconfig/network, and /etc/hosts.

/etc/hosts must be updated as well because when you ping a hostname, the system resolves it through that file; if the new name is not in it, ping reports "unknown host":

<span style="font-size:18px;">[hadoop@cluster1 ~]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=cluster1
[hadoop@cluster1 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[hadoop@cluster1 ~]$ ping cluster1
ping: unknown host cluster1</span>
<span style="font-size:18px;">[hadoop@cluster1 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.181 cluster1
[hadoop@cluster1 ~]$ ping cluster1
PING cluster1 (192.168.200.181) 56(84) bytes of data.
64 bytes from cluster1 (192.168.200.181): icmp_seq=1 ttl=64 time=0.017 ms
^C
--- cluster1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 961ms
rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms
</span>
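For reference, the two edits can be scripted as follows (a minimal sketch assuming the RHEL 6 layout shown above, with the hostname and IP from this walkthrough):

# Run as root. Persist the new hostname...
sed -i 's/^HOSTNAME=.*/HOSTNAME=cluster1/' /etc/sysconfig/network
# ...map it to the machine's IP so name lookups succeed...
echo '192.168.200.181 cluster1' >> /etc/hosts
# ...and apply it to the running system without a reboot.
hostname cluster1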

2. Disable the firewall

This is a test environment, so for convenience we simply disable the firewall; in production it would normally stay enabled.

On Red Hat, the command to stop the firewall is service iptables stop; this is only temporary and the firewall comes back after a reboot. To disable it permanently, run chkconfig iptables off.
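A sketch of the permanent variant with a verification step (assuming a SysV-init system such as RHEL 6):

chkconfig iptables off        # remove iptables from every runlevel
chkconfig --list iptables     # confirm that all runlevels now read "off"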

<span style="font-size: 18px;">service iptables stop</span>

<span style="font-size:18px;">
</span>
<span style="font-size:18px;"></span><pre name="code" class="python"><span style="font-size:18px;">[root@cluster1 hadoop]# service iptables stop</span>
[root@cluster1 hadoop]# service iptables statusiptables: Firewall is not running.

 

3. Set up passwordless SSH login

Initially, the hadoop user's .ssh directory contains only known_hosts:

[hadoop@cluster1 ~]$ ls -lsa .ssh/
total 12
4 drwx------.  2 hadoop hadoop 4096 Aug 10 00:39 .
4 drwx------. 28 hadoop hadoop 4096 Aug  9 10:12 ..
4 -rw-r--r--   1 hadoop hadoop  391 Aug 10 00:39 known_hosts

1. Generate a public/private key pair:

[hadoop@cluster1 ~]$ ssh-keygen  -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
50:2f:f4:fe:c7:71:cb:88:e4:a2:e2:03:02:40:1b:0b hadoop@cluster1
The key's randomart image is:
+--[ RSA 2048]----+
|Eo      o        |
|o +    o o       |
|.o    . . o      |
|.      . o       |
|.       S ..  . .|
| . .      o...oo.|
|  . .    . o..oo |
|     o  . .  .   |
|    ..o.         |
+-----------------+

This creates the public key id_rsa.pub and the private key id_rsa in the .ssh directory:

[hadoop@cluster1 ~]$ ls -lsa .ssh/
total 20
4 drwx------.  2 hadoop hadoop 4096 Aug 10 00:43 .
4 drwx------. 28 hadoop hadoop 4096 Aug  9 10:12 ..
4 -rw-------   1 hadoop hadoop 1675 Aug 10 00:43 id_rsa
4 -rw-r--r--   1 hadoop hadoop  397 Aug 10 00:43 id_rsa.pub
4 -rw-r--r--   1 hadoop hadoop  391 Aug 10 00:39 known_hosts
Now append id_rsa.pub to .ssh/authorized_keys:

[hadoop@cluster1 .ssh]$ cat id_rsa.pub  >>  authorized_keys
[hadoop@cluster1 .ssh]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop  397 Aug 10 00:47 authorized_keys
-rw------- 1 hadoop hadoop 1675 Aug 10 00:43 id_rsa
-rw-r--r-- 1 hadoop hadoop  397 Aug 10 00:43 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop  391 Aug 10 00:39 known_hosts
[hadoop@cluster1 .ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAx8C97B2c9Hq+FJvI2lLBVre3YKoLCi63vBq5ombrctU7GhDyjDEOYjXgtOBUZxPdT1vSWEQVZVvBh/XLKYodMhAxYoVURcen9CSdU+xRdKSlTk085s5meI3VzmZVD1pNvZMRN9eG4BgUKyq+hky+vI0wg6rt1YWoYEHEVn4HzxoEfP5IVwun8ud8ZMbQxJfmdAkz00bfXNM9//lQza8wEf+9hqgkvSGt9VTKpaCU1BKpFd04zn1gSxWy4gXQFJeEqwR/aOGc0PTOm9MovHqHU0C6hAQ3U8VLbJiBPwAfo4q0UDkTXfv78W+Yc9duXuPg2qfEzOmoA/oyZNvv1GjwQ== hadoop@cluster1
[hadoop@cluster1 .ssh]$ 
Fix the permissions: authorized_keys must be 600, and the .ssh directory itself must be 700.

chmod 700 .ssh/
chmod 600 authorized_keys

Verify that you can log in to the local machine without a password:

[hadoop@cluster1 .ssh]$ ssh localhost
Last login: Sun Aug 10 00:40:09 2014 from localhost
[hadoop@cluster1 ~]$ ssh cluster1
The authenticity of host 'cluster1 (192.168.200.181)' can't be established.
RSA key fingerprint is f4:e5:62:af:24:35:ae:b6:48:32:34:fd:18:fd:e9:3d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cluster1,192.168.200.181' (RSA) to the list of known hosts.
Last login: Sun Aug 10 00:57:11 2014 from localhost
[hadoop@cluster1 ~]$
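On systems that ship ssh-copy-id, the append-and-chmod steps above can be collapsed into one command (a convenience sketch, not what was run in this walkthrough):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa           # non-interactive key generation
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@cluster1   # appends the key and fixes permissions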

4. Install the JDK

A Linux install typically comes with OpenJDK preinstalled. Here we first uninstall OpenJDK and then install the standard Sun JDK:
[hadoop@cluster1 ~]$ java -version
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.3.el6-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
[hadoop@cluster1 ~]$ rpm -qa | grep openjdk
java-1.6.0-openjdk-devel-1.6.0.0-1.66.1.13.0.el6.x86_64
java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
java-1.7.0-openjdk-devel-1.7.0.45-2.4.3.3.el6.x86_64
java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
Uninstalling requires root privileges; otherwise you get an error like:
can't create transaction lock on /var/lib/rpm/.rpm.lock (Permission denied)
[hadoop@cluster1 ~]$ rpm -e --nodeps java-1.6.0-openjdk-devel-1.6.0.0-1.66.1.13.0.el6.x86_64
error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Permission denied)
[hadoop@cluster1 ~]$ su - root
Password: 
[root@cluster1 ~]# rpm -e --nodeps java-1.6.0-openjdk-devel-1.6.0.0-1.66.1.13.0.el6.x86_64
[root@cluster1 ~]# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
[root@cluster1 ~]# rpm -e --nodeps java-1.7.0-openjdk-devel-1.7.0.45-2.4.3.3.el6.x86_64
[root@cluster1 ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
[root@cluster1 ~]# java -version
-bash: java: command not found
[root@cluster1 ~]# java
-bash: java: command not found
[root@cluster1 ~]# 
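The four removals can also be done in a single pass (a sketch; the step-by-step removal above is what was actually run):

# As root: list every installed OpenJDK package and erase each one.
rpm -qa | grep openjdk | xargs rpm -e --nodeps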

First download the Sun JDK package jdk-7u65-linux-x64.rpm, upload it to /home/hadoop/, and make it executable:
chmod u+x jdk-7u65-linux-x64.rpm
[hadoop@cluster1 ~]$ ll jdk-7u65-linux-x64.rpm 
-rwxrw-r-- 1 hadoop hadoop 126855389 Aug 10 01:05 jdk-7u65-linux-x64.rpm
[root@cluster1 hadoop]# rpm -ivh jdk-7u65-linux-x64.rpm 
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
<span style="white-space:pre">	</span>rt.jar...
<span style="white-space:pre">	</span>jsse.jar...
<span style="white-space:pre">	</span>charsets.jar...
<span style="white-space:pre">	</span>tools.jar...
<span style="white-space:pre">	</span>localedata.jar...
<span style="white-space:pre">	</span>jfxrt.jar...
[root@cluster1 hadoop]# <span style="font-family: Verdana, Arial, Helvetica, sans-serif;"></span>
Set the environment variables. The JDK is installed to /usr/java by default:
[root@cluster1 java]# ll
total 4
lrwxrwxrwx 1 root root   16 Aug 10 01:16 default -> /usr/java/latest
lrwxrwxrwx 1 root root   12 Aug 10 01:18 jdk -> jdk1.7.0_65/
drwxr-xr-x 8 root root 4096 Aug 10 01:15 jdk1.7.0_65
lrwxrwxrwx 1 root root   21 Aug 10 01:16 latest -> /usr/java/jdk1.7.0_65
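Note that the default and latest links above come from the installer, while the jdk link (created two minutes later, per the timestamps) was presumably added by hand, e.g.:

ln -s /usr/java/jdk1.7.0_65 /usr/java/jdk    # stable path for JAVA_HOME across JDK upgrades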
Append the following to /etc/profile:
export JAVA_HOME=/usr/java/jdk
export JRE_HOME=/usr/java/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Apply the settings with source:
[root@cluster1 java]# source /etc/profile
[root@cluster1 java]# java -version
java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@cluster1 java]# 

Part 2: Hadoop Installation

1. Upload the Hadoop tarball to the target directory and extract it:

[hadoop@cluster1 ~]$ tar -xzvf hadoop-2.2.0.tar.gz 
For easier configuration I created a symlink for the Hadoop directory:
[hadoop@cluster1 ~]$ ln -s hadoop-2.2.0 hadoop
[hadoop@cluster1 ~]$ ll hadoop
lrwxrwxrwx 1 hadoop hadoop 12 Aug 10 01:39 hadoop -> hadoop-2.2.0

2. Configure Hadoop

2.1 Edit etc/hadoop/hadoop-env.sh and set JAVA_HOME:
export JAVA_HOME=/usr/java/jdk
2.2 Edit etc/hadoop/core-site.xml and add the following inside <configuration>:
<span style="font-family:Arial;"><property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/storage/hdfs/data</value>
  <description>A base for other temporarydirectories.</description>
</property>
<property>
 <name>fs.default.name</name>
 <value>hdfs://cluster1:8010</value>
 <description>The name of the default file system.  A URI whose
 scheme and authority determine the FileSystem implementation.  The
 uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to
 determine the host, port, etc. for a filesystem.</description>
</property></span>
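hadoop.tmp.dir above points at /home/hadoop/storage/hdfs/data; creating the directory up front avoids permission surprises (an assumed housekeeping step — the namenode -format run later will populate it):

mkdir -p /home/hadoop/storage/hdfs/data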
2.3 Edit /home/hadoop/hadoop/etc/hadoop/mapred-site.xml:
<configuration>
<property>
        <name>mapred.job.tracker</name>
        <value>cluster1:54311</value>
        <description>The host and port that the MapReduce job tracker runs
        at. If "local", then jobs are run in-process as a single map
        and reduce task.</description>
</property>
<property>
        <name>mapred.map.tasks</name>
        <value>10</value>
        <description>As a rule of thumb, use 10x the number of slaves
        (i.e., number of tasktrackers).</description>
</property>
<property>
        <name>mapred.reduce.tasks</name>
        <value>2</value>
        <description>As a rule of thumb, use 2x the number of slave
        processors (i.e., number of tasktrackers).</description>
</property>
</configuration>
2.4 Edit /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml and add the following inside <configuration>:
<configuration>
<property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified at create time.
        </description>
</property>
</configuration>
2.5 Configure environment variables
Add the following to ~/.bash_profile:
export HADOOP_HOME=/home/hadoop/hadoop  
export HADOOP_COMMON_HOME=$HADOOP_HOME  
export HADOOP_HDFS_HOME=$HADOOP_HOME  
export HADOOP_MAPRED_HOME=$HADOOP_HOME  
export HADOOP_YARN_HOME=$HADOOP_HOME  
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop  
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib  
     
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native  
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib" 

Without these settings, configuring Hadoop 2.2.0 can fail with the error "could not resolve hostname HotSpot(TM): Name or service not known"; see http://blog.csdn.net/sunflower_cao/article/details/38513839.
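Reload the profile and sanity-check the result (hadoop version is a standard CLI subcommand; this quick check is an addition, not part of the original run):

source ~/.bash_profile
hadoop version    # should report Hadoop 2.2.0 if HADOOP_HOME and PATH are correct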
2.6 Test the installation
2.6.1 Format HDFS
[hadoop@cluster1 bin]$ ./hdfs namenode -format
14/08/10 02:21:12 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cluster1/192.168.200.181
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.2.0/etc/hadoop:... (long jar list elided) ...:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_65
************************************************************/
14/08/10 02:21:12 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/08/10 02:21:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-cfc44330-2969-4aa4-8b62-f12968f30ce3
14/08/10 02:21:36 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/08/10 02:21:36 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/08/10 02:21:36 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/10 02:21:36 INFO util.GSet: Computing capacity for map BlocksMap
14/08/10 02:21:36 INFO util.GSet: VM type       = 64-bit
14/08/10 02:21:36 INFO util.GSet: 2.0% max memory = 966.7 MB
14/08/10 02:21:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/08/10 02:21:36 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/10 02:21:36 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/08/10 02:21:36 INFO blockmanagement.BlockManager: maxReplication             = 512
14/08/10 02:21:36 INFO blockmanagement.BlockManager: minReplication             = 1
14/08/10 02:21:36 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/08/10 02:21:36 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/08/10 02:21:36 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/10 02:21:36 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/08/10 02:21:37 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
14/08/10 02:21:37 INFO namenode.FSNamesystem: supergroup          = supergroup
14/08/10 02:21:37 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/10 02:21:37 INFO namenode.FSNamesystem: HA Enabled: false
14/08/10 02:21:37 INFO namenode.FSNamesystem: Append Enabled: true
14/08/10 02:21:44 INFO util.GSet: Computing capacity for map INodeMap
14/08/10 02:21:44 INFO util.GSet: VM type       = 64-bit
14/08/10 02:21:44 INFO util.GSet: 1.0% max memory = 966.7 MB
14/08/10 02:21:44 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/08/10 02:21:44 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/10 02:21:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/10 02:21:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/10 02:21:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/08/10 02:21:44 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/10 02:21:44 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/10 02:21:44 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/08/10 02:21:44 INFO util.GSet: VM type       = 64-bit
14/08/10 02:21:44 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
14/08/10 02:21:44 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/08/10 02:21:48 INFO common.Storage: Storage directory /home/hadoop/storage/hdfs/data/dfs/name has been successfully formatted.
14/08/10 02:21:49 INFO namenode.FSImage: Saving image file /home/hadoop/storage/hdfs/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/08/10 02:21:51 INFO namenode.FSImage: Image file /home/hadoop/storage/hdfs/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 2 seconds.
14/08/10 02:21:52 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/08/10 02:21:52 INFO util.ExitUtil: Exiting with status 0
14/08/10 02:21:52 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cluster1/192.168.200.181
************************************************************/

2.6.2 Start DFS and YARN

The startup scripts live in the sbin directory: you can run start-all.sh, or start-dfs.sh and start-yarn.sh separately. Here I simply run start-all.sh:
[hadoop@cluster1 hadoop]$ sbin/start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/08/10 07:33:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [cluster1]
cluster1: starting namenode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-namenode-cluster1.out
cluster1: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-datanode-cluster1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-secondarynamenode-cluster1.out
14/08/10 07:36:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-cluster1.out
cluster1: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-cluster1.out
[hadoop@cluster1 hadoop]$ 

The startup logs are written to the logs folder under the Hadoop directory. Use jps to check whether Hadoop started successfully:
[hadoop@cluster1 hadoop]$ jps
2524 Jps
2339 NodeManager
2113 SecondaryNameNode
1845 NameNode
2245 ResourceManager
1933 DataNode
If these processes are present, the startup succeeded. You can also verify it through the NameNode web UI (by default on port 50070; screenshot omitted).

Clicking "Browse the filesystem" there shows the files on HDFS (screenshot omitted). It is empty at first; we can create a directory and upload files to HDFS with the hadoop fs commands:
[hadoop@cluster1 hadoop]$ bin/hadoop fs -mkdir /test
14/08/10 07:43:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@cluster1 hadoop]$ hadoop fs -ls hdfs://cluster1:9000/
14/08/10 07:44:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2014-08-10 07:43 hdfs://cluster1:9000/test
[hadoop@cluster1 hadoop]$ 

At this point the new directory is visible in the web UI (screenshot omitted).

Next, upload the logs from Hadoop's logs folder into the test directory:
<span style="font-size:18px;">[hadoop@cluster1 hadoop]$ bin/hadoop fs -put logs/*   /test/
14/08/10 07:50:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@cluster1 hadoop]$ bin/hadoop fs -ls    /test/
14/08/10 07:51:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 17 items
-rw-r--r--   1 hadoop supergroup          0 2014-08-10 07:50 /test/SecurityAuth-hadoop.audit
-rw-r--r--   1 hadoop supergroup      63666 2014-08-10 07:50 /test/hadoop-hadoop-datanode-cluster1.log
-rw-r--r--   1 hadoop supergroup        718 2014-08-10 07:50 /test/hadoop-hadoop-datanode-cluster1.out
-rw-r--r--   1 hadoop supergroup        718 2014-08-10 07:50 /test/hadoop-hadoop-datanode-cluster1.out.1
-rw-r--r--   1 hadoop supergroup     104882 2014-08-10 07:50 /test/hadoop-hadoop-namenode-cluster1.log
-rw-r--r--   1 hadoop supergroup        718 2014-08-10 07:50 /test/hadoop-hadoop-namenode-cluster1.out
-rw-r--r--   1 hadoop supergroup        718 2014-08-10 07:50 /test/hadoop-hadoop-namenode-cluster1.out.1
-rw-r--r--   1 hadoop supergroup      43731 2014-08-10 07:50 /test/hadoop-hadoop-secondarynamenode-cluster1.log
-rw-r--r--   1 hadoop supergroup        718 2014-08-10 07:50 /test/hadoop-hadoop-secondarynamenode-cluster1.out
-rw-r--r--   1 hadoop supergroup        718 2014-08-10 07:50 /test/hadoop-hadoop-secondarynamenode-cluster1.out.1
drwxr-xr-x   - hadoop supergroup          0 2014-08-10 07:50 /test/userlogs
-rw-r--r--   1 hadoop supergroup      48177 2014-08-10 07:50 /test/yarn-hadoop-nodemanager-cluster1.log
-rw-r--r--   1 hadoop supergroup        702 2014-08-10 07:50 /test/yarn-hadoop-nodemanager-cluster1.out
-rw-r--r--   1 hadoop supergroup        702 2014-08-10 07:50 /test/yarn-hadoop-nodemanager-cluster1.out.1
-rw-r--r--   1 hadoop supergroup      53821 2014-08-10 07:50 /test/yarn-hadoop-resourcemanager-cluster1.log
-rw-r--r--   1 hadoop supergroup        702 2014-08-10 07:50 /test/yarn-hadoop-resourcemanager-cluster1.out
-rw-r--r--   1 hadoop supergroup        702 2014-08-10 07:50 /test/yarn-hadoop-resourcemanager-cluster1.out.1



Run the WordCount example:

<span style="font-size:18px;">[hadoop@cluster1 hadoop]$ bin/hadoop fs  -put logs/hadoop-hadoop-datanode-cluster1.log  /test/
14/08/10 08:44:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@cluster1 hadoop]$  bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar org.apache.hadoop.examples.WordCount /test /output
14/08/10 08:45:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/10 08:45:59 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/08/10 08:46:12 INFO input.FileInputFormat: Total input paths to process : 1
14/08/10 08:46:13 INFO mapreduce.JobSubmitter: number of splits:1
14/08/10 08:46:13 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/08/10 08:46:13 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/08/10 08:46:13 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/08/10 08:46:13 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/08/10 08:46:13 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/08/10 08:46:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1407627394787_0002
14/08/10 08:46:23 INFO impl.YarnClientImpl: Submitted application application_1407627394787_0002 to ResourceManager at /0.0.0.0:8032
14/08/10 08:46:25 INFO mapreduce.Job: The url to track the job: http://cluster1:8088/proxy/application_1407627394787_0002/
14/08/10 08:46:25 INFO mapreduce.Job: Running job: job_1407627394787_0002
14/08/10 08:49:07 INFO mapreduce.Job: Job job_1407627394787_0002 running in uber mode : false
14/08/10 08:49:11 INFO mapreduce.Job:  map 0% reduce 0%
14/08/10 08:51:31 INFO mapreduce.Job:  map 67% reduce 0%
14/08/10 08:51:34 INFO mapreduce.Job:  map 100% reduce 0%
14/08/10 08:53:28 INFO mapreduce.Job:  map 100% reduce 67%
14/08/10 08:53:36 INFO mapreduce.Job:  map 100% reduce 100%
14/08/10 08:53:43 INFO mapreduce.Job: Job job_1407627394787_0002 completed successfully
14/08/10 08:53:45 INFO mapreduce.Job: Counters: 43
<span style="white-space:pre">	</span>File System Counters
<span style="white-space:pre">		</span>FILE: Number of bytes read=40298
<span style="white-space:pre">		</span>FILE: Number of bytes written=239143
<span style="white-space:pre">		</span>FILE: Number of read operations=0
<span style="white-space:pre">		</span>FILE: Number of large read operations=0
<span style="white-space:pre">		</span>FILE: Number of write operations=0
<span style="white-space:pre">		</span>HDFS: Number of bytes read=116593
<span style="white-space:pre">		</span>HDFS: Number of bytes written=36169
<span style="white-space:pre">		</span>HDFS: Number of read operations=6
<span style="white-space:pre">		</span>HDFS: Number of large read operations=0
<span style="white-space:pre">		</span>HDFS: Number of write operations=2
<span style="white-space:pre">	</span>Job Counters 
<span style="white-space:pre">		</span>Launched map tasks=1
<span style="white-space:pre">		</span>Launched reduce tasks=1
<span style="white-space:pre">		</span>Data-local map tasks=1
<span style="white-space:pre">		</span>Total time spent by all maps in occupied slots (ms)=138474
<span style="white-space:pre">		</span>Total time spent by all reduces in occupied slots (ms)=119153
<span style="white-space:pre">	</span>Map-Reduce Framework
<span style="white-space:pre">		</span>Map input records=409
<span style="white-space:pre">		</span>Map output records=4938
<span style="white-space:pre">		</span>Map output bytes=136174
<span style="white-space:pre">		</span>Map output materialized bytes=40298
<span style="white-space:pre">		</span>Input split bytes=126
<span style="white-space:pre">		</span>Combine input records=4938
<span style="white-space:pre">		</span>Combine output records=1044
<span style="white-space:pre">		</span>Reduce input groups=1044
<span style="white-space:pre">		</span>Reduce shuffle bytes=40298
<span style="white-space:pre">		</span>Reduce input records=1044
<span style="white-space:pre">		</span>Reduce output records=1044
<span style="white-space:pre">		</span>Spilled Records=2088
<span style="white-space:pre">		</span>Shuffled Maps =1
<span style="white-space:pre">		</span>Failed Shuffles=0
<span style="white-space:pre">		</span>Merged Map outputs=1
<span style="white-space:pre">		</span>GC time elapsed (ms)=7479
<span style="white-space:pre">		</span>CPU time spent (ms)=21740
<span style="white-space:pre">		</span>Physical memory (bytes) snapshot=295862272
<span style="white-space:pre">		</span>Virtual memory (bytes) snapshot=1675780096
<span style="white-space:pre">		</span>Total committed heap usage (bytes)=168628224
<span style="white-space:pre">	</span>Shuffle Errors
<span style="white-space:pre">		</span>BAD_ID=0
<span style="white-space:pre">		</span>CONNECTION=0
<span style="white-space:pre">		</span>IO_ERROR=0
<span style="white-space:pre">		</span>WRONG_LENGTH=0
<span style="white-space:pre">		</span>WRONG_MAP=0
<span style="white-space:pre">		</span>WRONG_REDUCE=0
<span style="white-space:pre">	</span>File Input Format Counters 
<span style="white-space:pre">		</span>Bytes Read=116467
<span style="white-space:pre">	</span>File Output Format Counters 
<span style="white-space:pre">		</span>Bytes Written=36169</span>

The job's progress can be tracked on the web UI at port 8088, and the results browsed there as well (screenshots omitted).

Inspect the output with HDFS commands (only part of the word-count output is reproduced below):

[hadoop@cluster1 hadoop]$ hadoop fs -ls /output
14/08/10 09:00:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2014-08-10 08:53 /output/_SUCCESS
-rw-r--r--   1 hadoop supergroup      36169 2014-08-10 08:53 /output/part-r-00000

[hadoop@cluster1 hadoop]$ bin/hadoop fs -cat /output/part-r-00000
on	14
op:	81
org.apache.hadoop.hdfs.server.common.Storage:	9
org.apache.hadoop.hdfs.server.common.Util:	4
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner:	41
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner:	2
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace:	81
org.apache.hadoop.hdfs.server.datanode.DataNode:	121
org.apache.hadoop.hdfs.server.datanode.DirectoryScanner:	2
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:	72
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:	18
org.apache.hadoop.http.HttpServer:	10
org.apache.hadoop.ipc.Server:	6
org.apache.hadoop.metrics2.impl.MetricsConfig:	2
org.apache.hadoop.metrics2.impl.MetricsSystemImpl:	4
org.apache.hadoop.util.GSet:	8
org.apache.hadoop.util.NativeCodeLoader:	2
org.mortbay.log.Slf4jLog	2
org.mortbay.log:	6
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)	2
period	2
platform...	2
pool	27
port	4
process	1
processed	3
processing	3
properties	2
received	2
registered	4
replicas	8
report,	3
request	2
scan	6
second(s).	2
sent	3
server	6
service	12
should	4
signal	2
size=1	2
snapshot	2
specified	4
src:	120
srvID:	81
started	2
starting	8
state	2
static	2
static_user_filter	6
storage:	2
streaming	2
succeeded	39
successfully	2
system	2
taken	2
terminating	39
time	4
to	46
took	3
trying	2
txid=1	1
txid=4	1
type	2
type=LAST_IN_PIPELINE,	39
unknown)	2
up	2
update	4
using	4
version	2
via	2
volume	8
where	2
with	10
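To keep a local copy of the result, pull the file out of HDFS (an optional extra step, not part of the original run):

bin/hadoop fs -get /output/part-r-00000 /tmp/wordcount-result.txt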

