Hadoop Installation: Local (Standalone) Mode

Now you are ready to start your Hadoop cluster in one of the supported modes:

  • Standalone Mode
  • Pseudo-Distributed Mode
  • Fully-Distributed Mode
  • HA Fully-Distributed Mode

Install the JDK and Configure Environment Variables

  1. Java™ must be installed. Recommended Java versions are described at HadoopJavaVersions.
# su - root
[root@localhost ~]# cd /usr/local
# Upload the JDK package jdk-8u121-linux-x64.tar.gz
[root@localhost local]# rz
[root@localhost local]# ll
# Extract jdk-8u121-linux-x64.tar.gz
[root@localhost local]# tar -zxvf jdk-8u121-linux-x64.tar.gz
# Optionally create a symlink to shorten the path (you can skip this; adjust the paths to your layout)
ln -s /usr/local/jdk1.8.0_121 /usr/jdk
# Delete the tarball
[root@localhost local]# rm -rf jdk-8u121-linux-x64.tar.gz
# Rename the directory
[root@localhost local]# mv jdk1.8.0_121 jdk
# As root, set the Java environment variables
[root@localhost local]# vi /etc/profile
Append the following after the last line:
#java environment variables
JAVA_HOME=/usr/local/jdk
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
# Make the new environment variables take effect
[root@localhost local]# source /etc/profile
# Verify the JDK environment
[root@localhost local]# java -version
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.3.el6-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
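If java -version still reports the system OpenJDK, as in the output above, the new variables have not taken effect in the current shell. A quick check (a sketch; paths assume the install above):

# Confirm which java the shell resolves and where JAVA_HOME points
which java        # expected: /usr/local/jdk/bin/java
echo $JAVA_HOME   # expected: /usr/local/jdk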


Configure the Network and Hostname

# Temporarily change the hostname
[root@bigdata ~]# hostname bigdata
# Permanently change the hostname
[root@bigdata ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=bigdata
# Map the hostname to an IP address
[root@bigdata ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.110 bigdata
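To confirm the hostname and mapping took effect, the following quick check should work (a sketch):

# Verify hostname and name resolution
hostname            # should print bigdata
ping -c 1 bigdata   # should resolve to the IP configured above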


Create the hadoop User and Group

# Create the hadoop group
groupadd hadoop
# Add the hadoop user, assigning it to the hadoop group
useradd hadoop -g hadoop
# Set a login password for the hadoop user
passwd hadoop
# Create the home directory
mkdir /home/hadoop
# Change ownership to the hadoop user in the hadoop group
chown -R hadoop:hadoop /home/hadoop
# Inspect the hadoop user
[root@localhost ~]# id hadoop
uid=500(hadoop) gid=500(hadoop) groups=500(hadoop)

 

Passwordless SSH Setup

[hadoop@localhost ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
78:80:cd:16:42:ce:fd:2d:7a:00:6e:de:67:be:3a:6b hadoop@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|   .o .          |
|   o * .         |
|    = *          |
|   . o + .       |
|    o o S.      |
|   o . + .       |
|    . o +        |
|      E=         |
|     .o+o.       |
+-----------------+
[hadoop@localhost ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@localhost ~]$ ll -a
total 40
drwx------. 6 hadoop hadoop 4096 May  6 14:29 .
drwxr-xr-x. 3 root   root   4096 May  6 12:18 ..
-rw-------. 1 hadoop hadoop  476 May  6 14:26 .bash_history
-rw-r--r--. 1 hadoop hadoop   18 Jul 18  2013 .bash_logout
-rw-r--r--. 1 hadoop hadoop  176 Jul 18  2013 .bash_profile
-rw-r--r--. 1 hadoop hadoop  124 Jul 18  2013 .bashrc
drwxr-xr-x. 2 hadoop hadoop 4096 Nov 12  2010 .gnome2
drwxr-xr-x. 9 hadoop hadoop 4096 Nov 14  2014 hadoop
drwxrwxr-x. 2 hadoop hadoop 4096 May  6 12:55 .oracle_jre_usage
drwx------. 2 hadoop hadoop 4096 May  6 14:29 .ssh
[hadoop@localhost ~]$ chmod 0600 ~/.ssh/authorized_keys
# Now confirm that you can ssh to localhost without entering a password: $ ssh localhost
[hadoop@localhost ~]$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 46:73:c3:15:e8:5c:a9:14:c3:db:d6:33:05:64:6b:d6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Sat May 6 14:26:50 2017 from localhost
[hadoop@localhost ~]$

 

 

Install Hadoop

[root@localhost local]# su - hadoop
[hadoop@localhost ~]$ pwd
/home/hadoop
# Upload hadoop-2.6.0.tar.gz
[hadoop@localhost ~]$ rz
[hadoop@localhost ~]$ tar -zxvf hadoop-2.6.0.tar.gz
[hadoop@localhost ~]$ rm -rf hadoop-2.6.0.tar.gz
[hadoop@localhost ~]$ mv hadoop-2.6.0 hadoop
[hadoop@localhost ~]$ ll
total 4
drwxr-xr-x. 9 hadoop hadoop 4096 Nov 14  2014 hadoop


# Check whether this Hadoop build is 32-bit or 64-bit
[hadoop@bigdata native]$ file /home/hadoop/hadoop/lib/native/libhadoop.so.1.0.0
/home/hadoop/hadoop/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped

# Check the Hadoop version
[hadoop@localhost bin]$ /home/hadoop/hadoop/bin/hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /home/hadoop/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar
[hadoop@localhost bin]$
# Edit the configuration files
[hadoop@localhost ~]$ cd ~/hadoop/etc/hadoop/
Hadoop can run on a single node in pseudo-distributed mode: each Hadoop daemon runs as a separate Java process, the node acts as both NameNode and DataNode, and jobs read their files from HDFS.
The configuration files live in /home/hadoop/hadoop/etc/hadoop/. Pseudo-distributed mode requires editing two of them, core-site.xml and hdfs-site.xml. Both are XML files in which each setting is declared as a property with a name and a value.

Edit core-site.xml (gedit is convenient if you have a desktop: gedit ./etc/hadoop/core-site.xml), replacing the empty
<configuration>
</configuration>
with the following:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
Similarly, edit hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
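Optionally, once the daemons are up, the effective configuration can be confirmed with hdfs getconf (a sketch; run as the hadoop user):

~/hadoop/bin/hdfs getconf -confKey fs.defaultFS     # expect hdfs://localhost:9000
~/hadoop/bin/hdfs getconf -confKey dfs.replication  # expect 1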

# Create the corresponding directories
mkdir /home/hadoop/hadoop/tmp
mkdir /home/hadoop/hadoop/tmp/dfs
mkdir /home/hadoop/hadoop/tmp/dfs/data
mkdir /home/hadoop/hadoop/tmp/dfs/name
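Equivalently, the whole tree can be created in one command (same effect as the four mkdir calls above):

mkdir -p /home/hadoop/hadoop/tmp/dfs/{name,data}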
cd ~/hadoop/tmp/dfs
vi ~/hadoop/etc/hadoop/hadoop-env.sh
In hadoop-env.sh, replace the ${JAVA_HOME} reference with the absolute path of your own JAVA_HOME.
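The edited line should end up looking like this (path taken from the JDK install above):

# hadoop-env.sh: use an absolute path instead of ${JAVA_HOME}
export JAVA_HOME=/usr/local/jdk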
# Format a new distributed filesystem: $ bin/hadoop namenode -format
[hadoop@localhost hadoop]$ ~/hadoop/bin/hadoop namenode -format
17/05/06 14:48:13 INFO common.Storage: Storage directory /home/hadoop/hadoop/tmp/dfs/name has been successfully formatted.
17/05/06 14:48:13 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/05/06 14:48:13 INFO util.ExitUtil: Exiting with status 0
17/05/06 14:48:13 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
# Inspect the directories created by formatting:
[hadoop@localhost hadoop]$ ll -R /home/hadoop/hadoop/tmp
[hadoop@localhost ~]$ ~/hadoop/sbin/start-all.sh
[hadoop@localhost ~]$ jps
26640 ResourceManager
26737 NodeManager
26502 SecondaryNameNode
26343 DataNode
27034 Jps
26221 NameNode
[hadoop@localhost ~]$
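As a quick sanity check that HDFS is answering (a sketch; adjust paths as needed):

[hadoop@localhost ~]$ ~/hadoop/bin/hdfs dfs -ls /          # list the HDFS root
[hadoop@localhost ~]$ ~/hadoop/bin/hdfs dfsadmin -report   # DataNode capacity/usage summary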

# Switch to the root user
su - root
# Open the web/RPC ports in the firewall
[root@localhost local]# /sbin/iptables -I INPUT -p tcp --dport 50070 -j ACCEPT
[root@localhost local]# /sbin/iptables -I INPUT -p tcp --dport 9000 -j ACCEPT
[root@localhost local]# /sbin/iptables -I INPUT -p tcp --dport 8088 -j ACCEPT

# iptables: save the firewall rules to /etc/sysconfig/iptables
[root@localhost local]# /etc/rc.d/init.d/iptables save
# [root@localhost local]# /etc/init.d/iptables status

Check Running Status

View in a browser:
http://192.168.56.101:50070 (HDFS NameNode web UI)
http://192.168.56.101:8088 (YARN ResourceManager web UI)


Set Hadoop Environment Variables

You can set the Hadoop environment variables by appending the following commands to ~/.bashrc.

export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
Now apply the changes to the currently running shell:

$ source ~/.bashrc 
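With HADOOP_HOME on the PATH, the full path to the binaries is no longer needed; for example, this should now work from any directory (assuming the .bashrc above):

$ hadoop version   # resolves via $HADOOP_HOME/bin
$ which hadoop     # should print /home/hadoop/hadoop/bin/hadoop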


Run the Example Programs


Running the Example on CentOS

# If the program's main class has a package name, run it by its fully qualified class name
[hadoop@org hadoop]$ vi /home/hadoop/helloworld.txt
# Write the following content into helloworld.txt
    Hello Java
    Hello C
    Hello C++
    Hello Python
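If you prefer not to use vi, a heredoc produces the same file (equivalent alternative):

cat > /home/hadoop/helloworld.txt <<'EOF'
Hello Java
Hello C
Hello C++
Hello Python
EOF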
# Upload the local file /home/hadoop/helloworld.txt to the hdfs://org.alpha.elephant:9000/home/hadoop/ directory in HDFS
[hadoop@org hadoop]$ /home/hadoop/hadoop/bin/hadoop dfs -put /home/hadoop/helloworld.txt hdfs://org.alpha.elephant:9000/home/hadoop/
[hadoop@org hadoop]$ cd /home/hadoop/hadoop/share/hadoop/mapreduce
[hadoop@org mapreduce]$ ls
hadoop-mapreduce-client-app-2.6.0.jar     hadoop-mapreduce-client-hs-2.6.0.jar          hadoop-mapreduce-client-jobclient-2.6.0-tests.jar  lib
hadoop-mapreduce-client-common-2.6.0.jar  hadoop-mapreduce-client-hs-plugins-2.6.0.jar  hadoop-mapreduce-client-shuffle-2.6.0.jar          lib-examples
hadoop-mapreduce-client-core-2.6.0.jar    hadoop-mapreduce-client-jobclient-2.6.0.jar   hadoop-mapreduce-examples-2.6.0.jar                sources
[hadoop@org mapreduce]$ /home/hadoop/hadoop/bin/hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /home/hadoop/helloworld.txt /home/hadoop/
18/05/13 15:37:17 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
18/05/13 15:37:17 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
18/05/13 15:37:18 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/home/hadoop/hadoop/tmp/mapred/staging/hadoop1598597489/.staging/job_local1598597489_0001
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://org.alpha.elephant:9000/home/hadoop/helloworld.txt
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
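The job fails because the file was never actually uploaded to hdfs://org.alpha.elephant:9000/home/hadoop/. The successful run below reads /input/helloworld.txt, so an upload step along these lines is presumably missing from the transcript:

# Create /input on HDFS and upload the sample file (step not shown above)
[hadoop@org mapreduce]$ /home/hadoop/hadoop/bin/hdfs dfs -mkdir -p /input
[hadoop@org mapreduce]$ /home/hadoop/hadoop/bin/hdfs dfs -put /home/hadoop/helloworld.txt /input/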
[hadoop@org mapreduce]$ /home/hadoop/hadoop/bin/hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount hdfs://org.alpha.elephant:9000/input/helloworld.txt hdfs://org.alpha.elephant:9000/output/
18/05/13 15:39:07 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
18/05/13 15:39:07 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://org.alpha.elephant:9000/output already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
        at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:562)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[hadoop@org mapreduce]$ /home/hadoop/hadoop/bin/hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount hdfs://org.alpha.elephant:9000/input/helloworld.txt hdfs://org.alpha.elephant:9000/output1
18/05/13 15:39:19 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
18/05/13 15:39:19 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
18/05/13 15:39:19 INFO input.FileInputFormat: Total input paths to process : 1
18/05/13 15:39:19 INFO mapreduce.JobSubmitter: number of splits:1
18/05/13 15:39:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local622730093_0001
18/05/13 15:39:20 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
18/05/13 15:39:20 INFO mapreduce.Job: Running job: job_local622730093_0001
18/05/13 15:39:20 INFO mapred.LocalJobRunner: OutputCommitter set in config null
18/05/13 15:39:20 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
18/05/13 15:39:21 INFO mapred.LocalJobRunner: Waiting for map tasks
18/05/13 15:39:21 INFO mapred.LocalJobRunner: Starting task: attempt_local622730093_0001_m_000000_0
18/05/13 15:39:21 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
18/05/13 15:39:21 INFO mapred.MapTask: Processing split: hdfs://org.alpha.elephant:9000/input/helloworld.txt:0+43
18/05/13 15:39:21 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
18/05/13 15:39:21 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
18/05/13 15:39:21 INFO mapred.MapTask: soft limit at 83886080
18/05/13 15:39:21 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
18/05/13 15:39:21 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
18/05/13 15:39:21 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
18/05/13 15:39:21 INFO mapreduce.Job: Job job_local622730093_0001 running in uber mode : false
18/05/13 15:39:21 INFO mapreduce.Job:  map 0% reduce 0%
18/05/13 15:39:21 INFO mapred.LocalJobRunner: 
18/05/13 15:39:21 INFO mapred.MapTask: Starting flush of map output
18/05/13 15:39:21 INFO mapred.MapTask: Spilling map output
18/05/13 15:39:21 INFO mapred.MapTask: bufstart = 0; bufend = 74; bufvoid = 104857600
18/05/13 15:39:21 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214368(104857472); length = 29/6553600
18/05/13 15:39:21 INFO mapred.MapTask: Finished spill 0
18/05/13 15:39:21 INFO mapred.Task: Task:attempt_local622730093_0001_m_000000_0 is done. And is in the process of committing
18/05/13 15:39:21 INFO mapred.LocalJobRunner: map
18/05/13 15:39:21 INFO mapred.Task: Task 'attempt_local622730093_0001_m_000000_0' done.
18/05/13 15:39:21 INFO mapred.LocalJobRunner: Finishing task: attempt_local622730093_0001_m_000000_0
18/05/13 15:39:21 INFO mapred.LocalJobRunner: map task executor complete.
18/05/13 15:39:21 INFO mapred.LocalJobRunner: Waiting for reduce tasks
18/05/13 15:39:21 INFO mapred.LocalJobRunner: Starting task: attempt_local622730093_0001_r_000000_0
18/05/13 15:39:21 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
18/05/13 15:39:21 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@177733c0
18/05/13 15:39:21 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10
18/05/13 15:39:21 INFO reduce.EventFetcher: attempt_local622730093_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
18/05/13 15:39:22 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local622730093_0001_m_000000_0 decomp: 56 len: 60 to MEMORY
18/05/13 15:39:22 INFO reduce.InMemoryMapOutput: Read 56 bytes from map-output for attempt_local622730093_0001_m_000000_0
18/05/13 15:39:22 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
18/05/13 15:39:22 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 56, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->56
18/05/13 15:39:22 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
18/05/13 15:39:22 INFO mapred.LocalJobRunner: 1 / 1 copied.
18/05/13 15:39:22 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
18/05/13 15:39:22 INFO mapred.Merger: Merging 1 sorted segments
18/05/13 15:39:22 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 52 bytes
18/05/13 15:39:22 INFO reduce.MergeManagerImpl: Merged 1 segments, 56 bytes to disk to satisfy reduce memory limit
18/05/13 15:39:22 INFO reduce.MergeManagerImpl: Merging 1 files, 60 bytes from disk
18/05/13 15:39:22 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
18/05/13 15:39:22 INFO mapred.Merger: Merging 1 sorted segments
18/05/13 15:39:22 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 52 bytes
18/05/13 15:39:22 INFO mapred.LocalJobRunner: 1 / 1 copied.
18/05/13 15:39:22 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
18/05/13 15:39:22 INFO mapreduce.Job:  map 100% reduce 0%
18/05/13 15:39:22 INFO mapred.Task: Task:attempt_local622730093_0001_r_000000_0 is done. And is in the process of committing
18/05/13 15:39:22 INFO mapred.LocalJobRunner: 1 / 1 copied.
18/05/13 15:39:22 INFO mapred.Task: Task attempt_local622730093_0001_r_000000_0 is allowed to commit now
18/05/13 15:39:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_local622730093_0001_r_000000_0' to hdfs://org.alpha.elephant:9000/output1/_temporary/0/task_local622730093_0001_r_000000
18/05/13 15:39:22 INFO mapred.LocalJobRunner: reduce > reduce
18/05/13 15:39:22 INFO mapred.Task: Task 'attempt_local622730093_0001_r_000000_0' done.
18/05/13 15:39:22 INFO mapred.LocalJobRunner: Finishing task: attempt_local622730093_0001_r_000000_0
18/05/13 15:39:22 INFO mapred.LocalJobRunner: reduce task executor complete.
18/05/13 15:39:23 INFO mapreduce.Job:  map 100% reduce 100%
18/05/13 15:39:23 INFO mapreduce.Job: Job job_local622730093_0001 completed successfully
18/05/13 15:39:23 INFO mapreduce.Job: Counters: 38
        File System Counters
                FILE: Number of bytes read=541154
                FILE: Number of bytes written=1055720
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=86
                HDFS: Number of bytes written=34
                HDFS: Number of read operations=13
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=4
        Map-Reduce Framework
                Map input records=4
                Map output records=8
                Map output bytes=74
                Map output materialized bytes=60
                Input split bytes=116
                Combine input records=8
                Combine output records=5
                Reduce input groups=5
                Reduce shuffle bytes=60
                Reduce input records=5
                Reduce output records=5
                Spilled Records=10
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=80
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=241442816
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=43
        File Output Format Counters 
                Bytes Written=34
[hadoop@org mapreduce]$ 


View job execution details in the YARN web UI: http://192.168.56.101:8088

# View the MapReduce results
[hadoop@org subdir0]$ /home/hadoop/hadoop/bin/hdfs dfs -cat /output1/part-r-00000
C       1
C++     1
Hello   4
Java    1
Python  1
[hadoop@org subdir0]$ 


# Locate where the MapReduce output is physically stored on the DataNode (the raw HDFS block)
[hadoop@org subdir0]$ cat blk_1073741827
C       1
C++     1
Hello   4
Java    1
Python  1
[hadoop@org subdir0]$ pwd
/home/hadoop/hadoop/tmp/dfs/data/current/BP-50902565-192.168.56.101-1525844802452/current/finalized/subdir0/subdir0



Running the Example with Eclipse on Windows 7

Errors and Solutions


1. On the first run under Windows 7 + Eclipse: Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory

Solution: add the following jars to the classpath: slf4j-api-1.7.21.jar and slf4j-log4j12-1.7.10.jar


2.1. Failed to locate the winutils binary in the hadoop binary path Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
2.2 util.NativeCodeLoader (NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2018-05-13 16:44:18,543 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(373)) - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
2.3 java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

Solution (for 2.1-2.3): download or build the winutils binaries matching your Hadoop version, place winutils.exe (and hadoop.dll) under %HADOOP_HOME%\bin, and set the HADOOP_HOME environment variable (or the hadoop.home.dir JVM system property) so Hadoop can locate them.

- java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V

Solution: this happens when the Hadoop version does not match the hadoop-common-x.y.z-bin binaries; download or build the hadoop-common-x.y.z-bin that corresponds to your Hadoop version.

3. Exception in thread "main" java.net.ConnectException: Call From OHY19TB8VRS9IRY/192.168.56.1 to org.alpha.elephant:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused Caused by: java.net.ConnectException: Connection refused: no further information

Solution: Hadoop has not been started. As the hadoop user, run /home/hadoop/hadoop/sbin/start-all.sh


4. Call From 192.168.56.1 to 192.168.56.101 error
Solution: the Windows client cannot resolve the hostname, so use the IP address in the URI: hdfs://org.alpha.elephant:9000/ ==> hdfs://192.168.56.101:9000/

FileInputFormat.addInputPath(job, new Path("hdfs://192.168.56.101:9000/input/"));// args[0]
FileOutputFormat.setOutputPath(job, new Path("hdfs://192.168.56.101:9000/output2"));// args[1]

5. org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://192.168.56.101:9000/output already exists

Solution: the output directory already exists; delete it first, or point the job at a different directory in the code.
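For example, to clear the stale directory before re-running (a sketch):

# Remove the existing output directory on HDFS
/home/hadoop/hadoop/bin/hdfs dfs -rm -r hdfs://192.168.56.101:9000/output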


6. Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=Administrator, access=WRITE, inode="/":hadoop:supergroup:drwxr-xr-x

Solution: [hadoop@org ~]$ vi /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml

Add the following property:
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
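Note that hdfs-site.xml is only read at daemon startup, so restart HDFS after the change (a sketch):

[hadoop@org ~]$ /home/hadoop/hadoop/sbin/stop-dfs.sh
[hadoop@org ~]$ /home/hadoop/hadoop/sbin/start-dfs.sh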

7. Log of a successful Hadoop wordcount run on Windows 7
2018-05-14 13:52:20,698 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2018-05-14 13:52:20,702 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2018-05-14 13:52:20,989 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 1
2018-05-14 13:52:21,066 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(494)) - number of splits:1
2018-05-14 13:52:21,176 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(583)) - Submitting tokens for job: job_local1139779914_0001
2018-05-14 13:52:21,355 INFO  [main] mapreduce.Job (Job.java:submit(1300)) - The url to track the job: http://localhost:8080/
2018-05-14 13:52:21,356 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1345)) - Running job: job_local1139779914_0001
2018-05-14 13:52:21,357 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2018-05-14 13:52:21,367 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-05-14 13:52:21,463 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2018-05-14 13:52:21,464 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1139779914_0001_m_000000_0
2018-05-14 13:52:21,491 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2018-05-14 13:52:21,640 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@105e4712
2018-05-14 13:52:21,647 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(753)) - Processing split: hdfs://192.168.56.101:9000/input/helloworld.txt:0+43
2018-05-14 13:52:21,709 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1202)) - (EQUATOR) 0 kvi 26214396(104857584)
2018-05-14 13:52:21,709 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(995)) - mapreduce.task.io.sort.mb: 100
2018-05-14 13:52:21,711 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(996)) - soft limit at 83886080
2018-05-14 13:52:21,711 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(997)) - bufstart = 0; bufvoid = 104857600
2018-05-14 13:52:21,711 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - kvstart = 26214396; length = 6553600
2018-05-14 13:52:21,716 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(402)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2018-05-14 13:52:22,012 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
2018-05-14 13:52:22,015 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1457)) - Starting flush of map output
2018-05-14 13:52:22,015 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1475)) - Spilling map output
2018-05-14 13:52:22,016 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - bufstart = 0; bufend = 74; bufvoid = 104857600
2018-05-14 13:52:22,016 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1478)) - kvstart = 26214396(104857584); kvend = 26214368(104857472); length = 29/6553600
2018-05-14 13:52:22,036 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1660)) - Finished spill 0
2018-05-14 13:52:22,042 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local1139779914_0001_m_000000_0 is done. And is in the process of committing
2018-05-14 13:52:22,072 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
2018-05-14 13:52:22,072 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1139779914_0001_m_000000_0' done.
2018-05-14 13:52:22,072 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1139779914_0001_m_000000_0
2018-05-14 13:52:22,072 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2018-05-14 13:52:22,074 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2018-05-14 13:52:22,074 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local1139779914_0001_r_000000_0
2018-05-14 13:52:22,081 INFO  [pool-6-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2018-05-14 13:52:22,166 INFO  [pool-6-thread-1] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@4166d6d3
2018-05-14 13:52:22,169 INFO  [pool-6-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@63f841e6
2018-05-14 13:52:22,184 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(196)) - MergerManager: memoryLimit=1323178368, maxSingleShuffleLimit=330794592, mergeThreshold=873297728, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2018-05-14 13:52:22,187 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1139779914_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2018-05-14 13:52:22,215 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(141)) - localfetcher#1 about to shuffle output of map attempt_local1139779914_0001_m_000000_0 decomp: 56 len: 60 to MEMORY
2018-05-14 13:52:22,233 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 56 bytes from map-output for attempt_local1139779914_0001_m_000000_0
2018-05-14 13:52:22,236 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(314)) - closeInMemoryFile -> map-output of size: 56, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->56
2018-05-14 13:52:22,237 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2018-05-14 13:52:22,238 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2018-05-14 13:52:22,238 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(674)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2018-05-14 13:52:22,253 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(597)) - Merging 1 sorted segments
2018-05-14 13:52:22,253 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(696)) - Down to the last merge-pass, with 1 segments left of total size: 52 bytes
2018-05-14 13:52:22,255 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(751)) - Merged 1 segments, 56 bytes to disk to satisfy reduce memory limit
2018-05-14 13:52:22,256 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(781)) - Merging 1 files, 60 bytes from disk
2018-05-14 13:52:22,256 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(796)) - Merging 0 segments, 0 bytes from memory into reduce
2018-05-14 13:52:22,257 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(597)) - Merging 1 sorted segments
2018-05-14 13:52:22,258 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(696)) - Down to the last merge-pass, with 1 segments left of total size: 52 bytes
2018-05-14 13:52:22,258 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2018-05-14 13:52:22,345 INFO  [pool-6-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2018-05-14 13:52:22,359 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1366)) - Job job_local1139779914_0001 running in uber mode : false
2018-05-14 13:52:22,360 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) -  map 100% reduce 0%
2018-05-14 13:52:23,253 INFO  [pool-6-thread-1] mapred.Task (Task.java:done(1001)) - Task:attempt_local1139779914_0001_r_000000_0 is done. And is in the process of committing
2018-05-14 13:52:23,256 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2018-05-14 13:52:23,256 INFO  [pool-6-thread-1] mapred.Task (Task.java:commit(1162)) - Task attempt_local1139779914_0001_r_000000_0 is allowed to commit now
2018-05-14 13:52:23,370 INFO  [pool-6-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local1139779914_0001_r_000000_0' to hdfs://192.168.56.101:9000/output4/_temporary/0/task_local1139779914_0001_r_000000
2018-05-14 13:52:23,371 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
2018-05-14 13:52:23,371 INFO  [pool-6-thread-1] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1139779914_0001_r_000000_0' done.
2018-05-14 13:52:23,372 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local1139779914_0001_r_000000_0
2018-05-14 13:52:23,372 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
2018-05-14 13:52:23,497 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) -  map 100% reduce 100%
2018-05-14 13:52:24,499 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1384)) - Job job_local1139779914_0001 completed successfully
2018-05-14 13:52:24,529 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1391)) - Counters: 38
    File System Counters
        FILE: Number of bytes read=541146
        FILE: Number of bytes written=1073282
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=86
        HDFS: Number of bytes written=34
        HDFS: Number of read operations=13
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Map-Reduce Framework
        Map input records=4
        Map output records=8
        Map output bytes=74
        Map output materialized bytes=60
        Input split bytes=112
        Combine input records=8
        Combine output records=5
        Reduce input groups=5
        Reduce shuffle bytes=60
        Reduce input records=5
        Reduce output records=5
        Spilled Records=10
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=531234816
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=43
    File Output Format Counters
        Bytes Written=34
8. Inspect the directory layout after formatting:
[hadoop@org subdir0]$ ll /home/hadoop/hadoop/tmp/
total 0
drwxrwxr-x. 5 hadoop hadoop 48 May  9 13:49 dfs
drwxrwxr-x. 4 hadoop hadoop 32 May 13 15:39 mapred
drwxr-xr-x. 5 hadoop hadoop 54 May 13 17:10 nm-local-dir
[hadoop@org subdir0]$ ll /home/hadoop/hadoop/tmp/dfs
total 0
drwx------. 3 hadoop hadoop 38 May 13 15:14 data
drwxrwxr-x. 3 hadoop hadoop 38 May 13 15:14 name
drwxrwxr-x. 3 hadoop hadoop 38 May 13 15:14 namesecondary
[hadoop@org subdir0]$ ll /home/hadoop/hadoop/tmp/dfs/name
total 8
drwxrwxr-x. 2 hadoop hadoop 4096 May 13 17:06 current
-rw-rw-r--. 1 hadoop hadoop   23 May 13 15:14 in_use.lock
[hadoop@org subdir0]$ ll /home/hadoop/hadoop/tmp/dfs/name/current/
total 4168
-rw-rw-r--. 1 hadoop hadoop 1048576 May  9 13:49 edits_0000000000000000001-0000000000000000001
-rw-rw-r--. 1 hadoop hadoop      42 May  9 14:05 edits_0000000000000000002-0000000000000000003
-rw-rw-r--. 1 hadoop hadoop      42 May  9 15:05 edits_0000000000000000004-0000000000000000005
-rw-rw-r--. 1 hadoop hadoop      42 May  9 16:05 edits_0000000000000000006-0000000000000000007
-rw-rw-r--. 1 hadoop hadoop      42 May  9 17:05 edits_0000000000000000008-0000000000000000009
-rw-rw-r--. 1 hadoop hadoop 1048576 May  9 17:05 edits_0000000000000000010-0000000000000000010
-rw-rw-r--. 1 hadoop hadoop    1399 May  9 22:02 edits_0000000000000000011-0000000000000000028
-rw-rw-r--. 1 hadoop hadoop      42 May  9 23:02 edits_0000000000000000029-0000000000000000030
-rw-rw-r--. 1 hadoop hadoop      42 May 10 00:02 edits_0000000000000000031-0000000000000000032
-rw-rw-r--. 1 hadoop hadoop      42 May 10 01:02 edits_0000000000000000033-0000000000000000034
-rw-rw-r--. 1 hadoop hadoop      42 May 10 02:02 edits_0000000000000000035-0000000000000000036
-rw-rw-r--. 1 hadoop hadoop      42 May 10 03:02 edits_0000000000000000037-0000000000000000038
-rw-rw-r--. 1 hadoop hadoop 1048576 May 10 03:02 edits_0000000000000000039-0000000000000000039
-rw-rw-r--. 1 hadoop hadoop    1844 May 13 16:06 edits_0000000000000000040-0000000000000000057
-rw-rw-r--. 1 hadoop hadoop      42 May 13 17:06 edits_0000000000000000058-0000000000000000059
-rw-rw-r--. 1 hadoop hadoop 1048576 May 13 17:06 edits_inprogress_0000000000000000060
-rw-rw-r--. 1 hadoop hadoop     760 May 13 16:06 fsimage_0000000000000000057
-rw-rw-r--. 1 hadoop hadoop      62 May 13 16:06 fsimage_0000000000000000057.md5
-rw-rw-r--. 1 hadoop hadoop     760 May 13 17:06 fsimage_0000000000000000059
-rw-rw-r--. 1 hadoop hadoop      62 May 13 17:06 fsimage_0000000000000000059.md5
-rw-rw-r--. 1 hadoop hadoop       3 May 13 17:06 seen_txid
-rw-rw-r--. 1 hadoop hadoop     204 May 13 15:14 VERSION
[hadoop@org subdir0]$



