[bruce@iRobot hadoop]$ $HADOOP_PREFIX/bin/hdfs dfs -mkdir /user
[bruce@iRobot hadoop]$ $HADOOP_PREFIX/bin/hdfs dfs -mkdir /user/bruce
[bruce@iRobot hadoop]$ tail -f logs/*-namenode*.log
2015-11-19 15:01:20,442 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 11Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 37
2015-11-19 15:05:34,036 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 3 Total time for transactions(ms): 11Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 48
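By convention, each HDFS user's home directory is /user/&lt;username&gt;, which is what the two `-mkdir` commands above create; relative paths in later `hdfs dfs` commands (such as `-ls .` and `-put 1.txt`) resolve against it. A minimal local stand-in sketch (the /tmp path is illustrative, not real HDFS):

```shell
# Stand-in for the HDFS namespace created by the two -mkdir commands above.
base=/tmp/fake-hdfs
mkdir -p "$base/user/bruce"     # /user, then /user/bruce
ls -d "$base/user/bruce"        # relative hdfs dfs paths resolve against this dir
```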
----------------------------------------------------------------------------
[bruce@iRobot hadoop]$ $HADOOP_PREFIX/bin/hdfs dfs -ls .
15/11/19 15:07:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[bruce@iRobot hadoop]$ ls
1.txt bin bin-mapreduce1 cloudera etc examples examples-mapreduce1 hellohadoop-1.0-SNAPSHOT.jar include lib libexec logs sbin share src
[bruce@iRobot hadoop]$ cat 1.txt
monkey
sea
funny
hello
world
google
cats
worry
over
new
day
[bruce@iRobot hadoop]$ $HADOOP_PREFIX/bin/hdfs dfs -put 1.txt
[bruce@iRobot hadoop]$
(About 10 minutes later, display the 1.txt file in HDFS:)
[bruce@iRobot hadoop]$ bin/hdfs dfs -ls .
Found 1 items
-rw-r--r-- 1 bruce oinstall 59 2015-11-19 15:07 1.txt
[bruce@iRobot hadoop]$ bin/hdfs dfs -cat 1.txt
monkey
sea
funny
hello
world
google
cats
worry
over
new
day
[bruce@iRobot hadoop]$
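The 59-byte size reported by `hdfs dfs -ls` above can be checked locally: the 11 words total 49 characters, so 59 bytes works out only if the file ends without a trailing newline (49 + 10 newlines). A quick reconstruction, assuming exactly that layout:

```shell
# Recreate the demo file: 11 words, newline-separated, no trailing newline.
printf 'monkey\nsea\nfunny\nhello\nworld\ngoogle\ncats\nworry\nover\nnew\nday' > /tmp/1.txt
wc -c < /tmp/1.txt   # 59 bytes, matching the size hdfs dfs -ls reports
```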
[bruce@iRobot hadoop]$ tail -f logs/*-namenode*.log
2015-11-19 15:07:57,606 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/bruce/1.txt._COPYING_. BP-981411196-192.168.100.200-1447912540337 blk_6533808687986917333_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
2015-11-19 15:07:57,858 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.100.200:50010 is added to blk_6533808687986917333_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]} size 0
2015-11-19 15:07:57,863 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/1.txt._COPYING_ is closed by DFSClient_NONMAPREDUCE_1817581491_1
(After displaying the 1.txt file in HDFS:)
2015-11-19 15:17:17,806 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.100.200
2015-11-19 15:17:17,806 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2015-11-19 15:17:17,806 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3
2015-11-19 15:17:17,806 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 10 Total time for transactions(ms): 14Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 76
2015-11-19 15:17:17,813 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 10 Total time for transactions(ms): 14Number of transactions batched in Syncs: 0 Number of syncs: 8 SyncTimes(ms): 83
2015-11-19 15:17:17,814 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /u01/hadoopdata/hadoop-bruce/dfs/name/current/edits_inprogress_0000000000000000003 -> /u01/hadoopdata/hadoop-bruce/dfs/name/current/edits_0000000000000000003-0000000000000000012
2015-11-19 15:17:17,814 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 13
2015-11-19 15:17:17,949 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://iRobot:50090/getimage?getimage=1&txid=12&storageInfo=-40:337807999:0:CID-4fb81ac3-7e61-42c2-a707-46f2af89a6e2
2015-11-19 15:17:17,968 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.02s at 0.00 KB/s
2015-11-19 15:17:17,968 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000012 size 358 bytes.
2015-11-19 15:17:17,981 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 2
2015-11-19 15:17:17,981 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/u01/hadoopdata/hadoop-bruce/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
[bruce@iRobot hadoop]$ tail -f logs/*datanode*.log
2015-11-19 15:07:57,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_6533808687986917333_1002 src: /192.168.100.200:14244 dest: /192.168.100.200:50010
2015-11-19 15:07:57,858 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:14244, dest: /192.168.100.200:50010, bytes: 59, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1817581491_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_6533808687986917333_1002, duration: 27240542
2015-11-19 15:07:57,858 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_6533808687986917333_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2015-11-19 15:08:01,620 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-981411196-192.168.100.200-1447912540337:blk_6533808687986917333_1002
(After displaying the 1.txt file in HDFS:)
2015-11-19 15:24:55,875 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:14676, bytes: 63, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1463876014_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_6533808687986917333_1002, duration: 1766219
[bruce@iRobot hadoop]$ tail -f logs/*secondarynamenode*.log
(After displaying the 1.txt file in HDFS:)
2015-11-19 15:17:17,868 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Image has not changed. Will not download image.
2015-11-19 15:17:17,869 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://192.168.100.200:50070/getimage?getedit=1&startTxId=3&endTxId=12&storageInfo=-40:337807999:0:CID-4fb81ac3-7e61-42c2-a707-46f2af89a6e2
2015-11-19 15:17:17,889 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.02s at 0.00 KB/s
2015-11-19 15:17:17,889 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file edits_tmp_0000000000000000003-0000000000000000012_0000001447917437869 size 0 bytes.
2015-11-19 15:17:17,890 INFO org.apache.hadoop.hdfs.server.namenode.Checkpointer: Checkpointer about to load edits from 1 stream(s).
2015-11-19 15:17:17,890 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /u01/hadoopdata/hadoop-bruce/dfs/namesecondary/current/edits_0000000000000000003-0000000000000000012 expecting start txid #3
2015-11-19 15:17:17,899 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /u01/hadoopdata/hadoop-bruce/dfs/namesecondary/current/edits_0000000000000000003-0000000000000000012 of size 544 edits # 10 loaded in 0 seconds
2015-11-19 15:17:17,900 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Saving image file /u01/hadoopdata/hadoop-bruce/dfs/namesecondary/current/fsimage.ckpt_0000000000000000012 using no compression
2015-11-19 15:17:17,919 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 358 saved in 0 seconds.
2015-11-19 15:17:17,941 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 2
2015-11-19 15:17:17,941 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/u01/hadoopdata/hadoop-bruce/dfs/namesecondary/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2015-11-19 15:17:17,946 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://192.168.100.200:50070/getimage?putimage=1&txid=12&port=50090&storageInfo=-40:337807999:0:CID-4fb81ac3-7e61-42c2-a707-46f2af89a6e2
2015-11-19 15:17:17,988 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.04s at 0.00 KB/s
2015-11-19 15:17:17,988 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with txid 12 to namenode at 192.168.100.200:50070
2015-11-19 15:17:17,988 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 358
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/
hadoop-bruce
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/hadoop-bruce/
dfs yarn
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/hadoop-bruce/dfs/
data name namesecondary
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/hadoop-bruce/dfs/name
current in_use.lock
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/hadoop-bruce/dfs/name/current/ -lrt
total 1052
-rw-r--r-- 1 bruce oinstall 206 Nov 19 13:55 VERSION
-rw-r--r-- 1 bruce oinstall 116 Nov 19 13:55 fsimage_0000000000000000000
-rw-r--r-- 1 bruce oinstall 62 Nov 19 13:55 fsimage_0000000000000000000.md5
-rw-r--r-- 1 bruce oinstall 30 Nov 19 14:17 edits_0000000000000000001-0000000000000000002
-rw-r--r-- 1 bruce oinstall 2 Nov 19 14:17 seen_txid
-rw-r--r-- 1 bruce oinstall 116 Nov 19 14:17 fsimage_0000000000000000002
-rw-r--r-- 1 bruce oinstall 62 Nov 19 14:17 fsimage_0000000000000000002.md5
-rw-r--r-- 1 bruce oinstall 1048576 Nov 19 15:07 edits_inprogress_0000000000000000003
[bruce@iRobot hadoop]$
(Changes after displaying the 1.txt file in HDFS:)
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/hadoop-bruce/dfs/name/current/ -lrt
total 1056
-rw-r--r-- 1 bruce oinstall 206 Nov 19 13:55 VERSION
-rw-r--r-- 1 bruce oinstall 30 Nov 19 14:17 edits_0000000000000000001-0000000000000000002
-rw-r--r-- 1 bruce oinstall 116 Nov 19 14:17 fsimage_0000000000000000002
-rw-r--r-- 1 bruce oinstall 62 Nov 19 14:17 fsimage_0000000000000000002.md5
-rw-r--r-- 1 bruce oinstall 544 Nov 19 15:17 edits_0000000000000000003-0000000000000000012
-rw-r--r-- 1 bruce oinstall 1048576 Nov 19 15:17 edits_inprogress_0000000000000000013
-rw-r--r-- 1 bruce oinstall 3 Nov 19 15:17 seen_txid
-rw-r--r-- 1 bruce oinstall 358 Nov 19 15:17 fsimage_0000000000000000012
-rw-r--r-- 1 bruce oinstall 62 Nov 19 15:17 fsimage_0000000000000000012.md5
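The difference between the two listings is the edit-log roll seen in the NameNode log: the in-progress segment is finalized under a name spanning its transaction range, a new segment opens at the next txid, and seen_txid is bumped. A local sketch of just the file renames (the /tmp path is an illustrative stand-in, not the real storage directory):

```shell
# Sketch of the roll: finalize txids 3-12, open a new segment at txid 13.
d=/tmp/name-current
mkdir -p "$d"
touch "$d/edits_inprogress_0000000000000000003"
mv "$d/edits_inprogress_0000000000000000003" \
   "$d/edits_0000000000000000003-0000000000000000012"   # finalized segment
touch "$d/edits_inprogress_0000000000000000013"         # new open segment
echo 13 > "$d/seen_txid"                                # next expected txid
ls "$d"
```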
[bruce@iRobot hadoop]$ ls /u01/hadoopdata/hadoop-bruce/dfs/data/ -lrt
total 8
-rw-r--r-- 1 bruce oinstall 11 Nov 19 14:06 in_use.lock
drwxr-xr-x 3 bruce oinstall 4096 Nov 19 14:06 current
[bruce@iRobot hadoop]$ ls -lrt /u01/hadoopdata/hadoop-bruce/dfs/data/current/
total 8
-rw-r--r-- 1 bruce oinstall 189 Nov 19 14:06 VERSION
drwx------ 4 bruce oinstall 4096 Nov 19 15:08 BP-981411196-192.168.100.200-1447912540337
[bruce@iRobot hadoop]$ ls -lrt /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/
total 12
drwxr-xr-x 2 bruce oinstall 4096 Nov 19 14:06 tmp
drwxr-xr-x 4 bruce oinstall 4096 Nov 19 14:06 current
-rw-r--r-- 1 bruce oinstall 96 Nov 19 15:08 dncp_block_verification.log.prev
-rw-r--r-- 1 bruce oinstall 0 Nov 19 15:08 dncp_block_verification.log.curr
[bruce@iRobot hadoop]$
[bruce@iRobot hadoop]$ cat /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/dncp_block_verification.log.prev
date="2015-11-19 15:08:01,620" time="1447916881620" genstamp="1002" id="6533808687986917333"
[bruce@iRobot hadoop]$
[bruce@iRobot hadoop]$ cat /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/dncp_block_verification.log.curr
[bruce@iRobot hadoop]$ ls -lrt /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/tmp/
total 0
[bruce@iRobot hadoop]$ ls -lrt /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/
total 12
-rw-r--r-- 1 bruce oinstall 133 Nov 19 14:06 VERSION
drwxr-xr-x 2 bruce oinstall 4096 Nov 19 15:07 rbw
drwxr-xr-x 2 bruce oinstall 4096 Nov 19 15:07 finalized
[bruce@iRobot hadoop]$ ls -lrt /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/rbw/
total 0
[bruce@iRobot hadoop]$ ls -lrt /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/
total 8
-rw-r--r-- 1 bruce oinstall 11 Nov 19 15:07 blk_6533808687986917333_1002.meta
-rw-r--r-- 1 bruce oinstall 59 Nov 19 15:07 blk_6533808687986917333
[bruce@iRobot hadoop]$
[bruce@iRobot hadoop]$ cat /u01/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/blk_6533808687986917333
monkey
sea
funny
hello
world
google
cats
worry
over
new
day
[bruce@iRobot hadoop]$
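The `cat` above shows the key point: the finalized blk_ file on the DataNode holds the file's payload byte-for-byte, while checksums live in the separate small .meta file. A rough local illustration of the raw-payload property (paths are illustrative, not real DataNode storage):

```shell
# Recreate the original bytes and a stand-in for the blk_ file, then
# confirm they are byte-identical, as cat-ing the real block file showed.
printf 'monkey\nsea\nfunny\nhello\nworld\ngoogle\ncats\nworry\nover\nnew\nday' > /tmp/orig_demo.txt
cp /tmp/orig_demo.txt /tmp/blk_demo       # stand-in for blk_6533808687986917333
cmp -s /tmp/orig_demo.txt /tmp/blk_demo && echo "identical payload"
```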
-------------------------------------------------------------------------------------------------------------------------
Run the wordcount job (started around 15:40):
[bruce@iRobot hadoop]$ $HADOOP_PREFIX/bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.0-cdh4.5.0.jar wordcount 1.txt 1.sort
15/11/19 15:40:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/19 15:40:45 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
15/11/19 15:40:45 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
15/11/19 15:40:46 INFO input.FileInputFormat: Total input paths to process : 1
15/11/19 15:40:46 INFO mapreduce.JobSubmitter: number of splits:1
15/11/19 15:40:46 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/11/19 15:40:46 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
15/11/19 15:40:46 WARN conf.Configuration: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
15/11/19 15:40:46 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
15/11/19 15:40:46 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
15/11/19 15:40:46 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
15/11/19 15:40:46 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
15/11/19 15:40:46 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
15/11/19 15:40:46 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/11/19 15:40:46 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
15/11/19 15:40:46 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
15/11/19 15:40:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1447914055057_0001
15/11/19 15:40:46 INFO client.YarnClientImpl: Submitted application application_1447914055057_0001 to ResourceManager at /192.168.100.200:8032
15/11/19 15:40:46 INFO mapreduce.Job: The url to track the job: http://192.168.100.200:54315/proxy/application_1447914055057_0001/
15/11/19 15:40:46 INFO mapreduce.Job: Running job: job_1447914055057_0001
15/11/19 15:40:55 INFO mapreduce.Job: Job job_1447914055057_0001 running in uber mode : false
15/11/19 15:40:55 INFO mapreduce.Job: map 0% reduce 0%
15/11/19 15:40:59 INFO mapreduce.Job: map 100% reduce 0%
15/11/19 15:41:04 INFO mapreduce.Job: map 100% reduce 100%
15/11/19 15:41:04 INFO mapreduce.Job: Job job_1447914055057_0001 completed successfully
15/11/19 15:41:04 INFO mapreduce.Job: Counters: 43
File System Counters
FILE: Number of bytes read=97
FILE: Number of bytes written=145627
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=166
HDFS: Number of bytes written=59
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2610
Total time spent by all reduces in occupied slots (ms)=3157
Map-Reduce Framework
Map input records=11
Map output records=11
Map output bytes=103
Map output materialized bytes=97
Input split bytes=107
Combine input records=11
Combine output records=8
Reduce input groups=8
Reduce shuffle bytes=97
Reduce input records=8
Reduce output records=8
Spilled Records=16
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=37
CPU time spent (ms)=2290
Physical memory (bytes) snapshot=373596160
Virtual memory (bytes) snapshot=1467797504
Total committed heap usage (bytes)=327876608
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=59
File Output Format Counters
Bytes Written=59
[bruce@iRobot hadoop]$
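For an input this small, the WordCount pipeline (map emits <word,1>; combine/reduce sums per word) can be mimicked locally with `sort | uniq -c`. A sketch with illustrative /tmp paths:

```shell
# Same 11 words as 1.txt; sort groups identical words, uniq -c counts them,
# awk reformats to the word<TAB>count layout WordCount writes.
printf 'monkey\nsea\nfunny\nhello\nworld\ngoogle\ncats\nworry\nover\nnew\nday\n' > /tmp/wc_in.txt
sort /tmp/wc_in.txt | uniq -c | awk '{print $2"\t"$1}' > /tmp/wc_out.txt
cat /tmp/wc_out.txt
```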
[bruce@iRobot hadoop]$ tail -f logs/*-namenode*.log
(After the job started around 15:40:)
2015-11-19 15:40:45,789 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 6 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 32
2015-11-19 15:40:46,012 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.jar. BP-981411196-192.168.100.200-1447912540337 blk_-2593472465878570135_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
2015-11-19 15:40:46,157 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.100.200:50010 is added to blk_-2593472465878570135_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]} size 0
2015-11-19 15:40:46,161 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.jar is closed by DFSClient_NONMAPREDUCE_1732184435_1
2015-11-19 15:40:46,172 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 1 to 10 for /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.jar
2015-11-19 15:40:46,258 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 1 to 10 for /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.split
2015-11-19 15:40:46,275 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.split. BP-981411196-192.168.100.200-1447912540337 blk_1039308966393142348_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
2015-11-19 15:40:46,283 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 192.168.100.200:50010 is added to blk_1039308966393142348_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]} size 0
2015-11-19 15:40:46,285 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.split is closed by DFSClient_NONMAPREDUCE_1732184435_1
2015-11-19 15:40:46,303 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1447914055057_0001/job.splitmetainfo. BP-981411196-192.168.100.200-1447912540337 blk_2736067084384974939_1008{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[
Hadoop Series 3: Logs from Running a Job