Interaction Logs After Submitting a Hadoop MapReduce Job

Copyright notice: a bricklayer built a rough house here, and visitors are welcome. If you find it useful, please credit the source when reposting. https://blog.csdn.net/jollypigclub/article/details/50233635


By interleaving the log output of the NameNode, DataNode, ResourceManager, and NodeManager with the console output from submitting a MapReduce job, we get a single log trace ordered along the time axis.

This makes it easy to follow the rough sequence of interactions inside Hadoop, from the moment the client submits the job onward.
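The interleaving itself can be done with a short script: tag every line of each component's log with its source file name, then sort all lines by the leading log4j timestamp. A minimal sketch (the file names and the bracketed-prefix format are assumptions modeled on the output below, not part of the original post):

```python
import re
from datetime import datetime

# log4j default timestamp, e.g. "2015-12-04 14:42:11,108"
TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})")

def merge_logs(sources):
    """sources: {filename: iterable of log lines} -> one time-ordered list,
    each line prefixed with its right-aligned source file name."""
    tagged = []
    for name, lines in sources.items():
        for line in lines:
            m = TS_RE.match(line)
            if m:  # keep only lines that start with a timestamp
                ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f")
                tagged.append((ts, "[%22s] %s" % (name, line.rstrip("\n"))))
    tagged.sort(key=lambda t: t[0])
    return [line for _, line in tagged]
```

Feeding it the four daemon logs plus the captured console output produces a combined trace in the same shape as the one below.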


The MapReduce job's console output comes from running a simple hand-written example: bin/hadoop jar hellohadoop-1.0-SNAPSHOT.jar WordCount demo out
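The WordCount job itself boils down to a map step that emits a (word, 1) pair per token and a reduce step that sums the counts per word. A plain-Python sketch of that logic (for illustration only; it is not the actual Java code inside hellohadoop-1.0-SNAPSHOT.jar):

```python
from collections import defaultdict

def wordcount_map(line):
    # map phase: emit one (word, 1) pair per whitespace-separated token
    return [(word, 1) for word in line.split()]

def wordcount_reduce(pairs):
    # reduce phase: sum the counts arriving for each word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def wordcount(lines):
    pairs = []
    for line in lines:
        pairs.extend(wordcount_map(line))
    return wordcount_reduce(pairs)
```

In the real job, the framework shuffles the mapper output so that all pairs for one word reach the same reducer; here the shuffle is collapsed into a single in-memory pass.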


The job's files are staged in HDFS under /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001.

The following files can be seen there:

appTokens

job.jar

job.split

job.splitmetainfo

job.xml


[         MapRedJob.txt] 2015-12-04 14:42:10,000 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
[         MapRedJob.txt] 2015-12-04 14:42:10,000 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
[   ResourceManager.txt] 2015-12-04 14:42:10,976 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
[         MapRedJob.txt] 2015-12-04 14:42:11,000 INFO input.FileInputFormat: Total input paths to process : 1
[         MapRedJob.txt] 2015-12-04 14:42:11,000 INFO mapreduce.JobSubmitter: number of splits:1
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
[         MapRedJob.txt] 2015-12-04 14:42:11,000 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
[         MapRedJob.txt] 2015-12-04 14:42:11,000 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1449210909990_0001
[          NameNode.txt] 2015-12-04 14:42:11,108 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.jar. BP-981411196-192.168.100.200-1447912540337 blk_-3104879886624780072_13430{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:11,260 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-3104879886624780072_13430 src: /192.168.100.200:49618 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:11,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49618, dest: /192.168.100.200:50010, bytes: 11876, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-465584270_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-3104879886624780072_13430, duration: 28404622
[          DataNode.txt] 2015-12-04 14:42:11,323 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-3104879886624780072_13430, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:11,328 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.jar is closed by DFSClient_NONMAPREDUCE_-465584270_1
[          NameNode.txt] 2015-12-04 14:42:11,341 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 1 to 10 for /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.jar
[          NameNode.txt] 2015-12-04 14:42:11,433 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Increasing replication from 1 to 10 for /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.split
[          NameNode.txt] 2015-12-04 14:42:11,453 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.split. BP-981411196-192.168.100.200-1447912540337 blk_3649142917674748753_13432{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:11,457 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_3649142917674748753_13432 src: /192.168.100.200:49619 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:11,467 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49619, dest: /192.168.100.200:50010, bytes: 150, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-465584270_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_3649142917674748753_13432, duration: 7871325
[          DataNode.txt] 2015-12-04 14:42:11,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_3649142917674748753_13432, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:11,474 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.split is closed by DFSClient_NONMAPREDUCE_-465584270_1
[          NameNode.txt] 2015-12-04 14:42:11,502 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.splitmetainfo. BP-981411196-192.168.100.200-1447912540337 blk_-2734758558562150374_13434{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:11,506 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-2734758558562150374_13434 src: /192.168.100.200:49620 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:11,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49620, dest: /192.168.100.200:50010, bytes: 22, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-465584270_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-2734758558562150374_13434, duration: 6621976
[          DataNode.txt] 2015-12-04 14:42:11,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-2734758558562150374_13434, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:11,517 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-465584270_1
[          NameNode.txt] 2015-12-04 14:42:11,652 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.xml. BP-981411196-192.168.100.200-1447912540337 blk_-1097534736843614414_13436{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:11,656 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-1097534736843614414_13436 src: /192.168.100.200:49621 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:11,661 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49621, dest: /192.168.100.200:50010, bytes: 60735, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-465584270_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-1097534736843614414_13436, duration: 3539086
[          DataNode.txt] 2015-12-04 14:42:11,662 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-1097534736843614414_13436, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:11,664 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.xml is closed by DFSClient_NONMAPREDUCE_-465584270_1
[          NameNode.txt] 2015-12-04 14:42:11,751 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/appTokens. BP-981411196-192.168.100.200-1447912540337 blk_4998228328493697677_13438{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:11,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_4998228328493697677_13438 src: /192.168.100.200:49622 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:11,759 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49622, dest: /192.168.100.200:50010, bytes: 7, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-465584270_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_4998228328493697677_13438, duration: 2889904
[          DataNode.txt] 2015-12-04 14:42:11,759 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_4998228328493697677_13438, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:11,761 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/appTokens is closed by DFSClient_NONMAPREDUCE_-465584270_1
[         MapRedJob.txt] 2015-12-04 14:42:12,000 INFO client.YarnClientImpl: Submitted application application_1449210909990_0001 to ResourceManager at /192.168.100.200:8032
[         MapRedJob.txt] 2015-12-04 14:42:12,000 INFO mapreduce.Job: The url to track the job: http://192.168.100.200:54315/proxy/application_1449210909990_0001/
[         MapRedJob.txt] 2015-12-04 14:42:12,000 INFO mapreduce.Job: Running job: job_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:12,050 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Storing Application with id application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:12,053 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:12,054 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user bruce
[   ResourceManager.txt] 2015-12-04 14:42:12,057 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Submit Application Request	TARGET=ClientRMService	RESULT=SUCCESS	APPID=application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:12,085 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1449210909990_0001 State change from NEW to SUBMITTED
[   ResourceManager.txt] 2015-12-04 14:42:12,087 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:12,089 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from NEW to SUBMITTED
[   ResourceManager.txt] 2015-12-04 14:42:12,116 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application Submission: application_1449210909990_0001 from bruce, currently active: 1
[   ResourceManager.txt] 2015-12-04 14:42:12,122 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from SUBMITTED to SCHEDULED
[   ResourceManager.txt] 2015-12-04 14:42:12,122 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1449210909990_0001 State change from SUBMITTED to ACCEPTED
[   ResourceManager.txt] 2015-12-04 14:42:12,433 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000001 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:12,433 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000001
[   ResourceManager.txt] 2015-12-04 14:42:12,434 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000001 of capacity <memory:2048, vCores:1> on host irobot:11000, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:2048, vCores:7> available
[   ResourceManager.txt] 2015-12-04 14:42:12,436 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
[   ResourceManager.txt] 2015-12-04 14:42:12,441 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1449210909990_0001 AttemptId: appattempt_1449210909990_0001_000001 MasterContainer: Container: [ContainerId: container_1449210909990_0001_01_000001, NodeId: irobot:11000, NodeHttpAddress: irobot:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_NEW, ]
[   ResourceManager.txt] 2015-12-04 14:42:12,444 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
[   ResourceManager.txt] 2015-12-04 14:42:12,447 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for attempt: appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:12,450 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:12,455 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:12,481 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1449210909990_0001_01_000001, NodeId: irobot:11000, NodeHttpAddress: irobot:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] for AM appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:12,481 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1449210909990_0001_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=<LOG_DIR> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
[       NodeManager.txt] 2015-12-04 14:42:12,832 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000001 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:12,861 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:12,866 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:12,871 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1449210909990_0001 transitioned from NEW to INITING
[       NodeManager.txt] 2015-12-04 14:42:12,873 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000001 to application application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:12,878 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1449210909990_0001_01_000001, NodeId: irobot:11000, NodeHttpAddress: irobot:8042, Resource: <memory:2048, vCores:1>, Priority: 0, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_NEW, ] for AM appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:12,878 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from ALLOCATED to LAUNCHED
[       NodeManager.txt] 2015-12-04 14:42:12,884 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1449210909990_0001 transitioned from INITING to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:12,893 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000001 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:12,902 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/appTokens transitioned from INIT to DOWNLOADING
[       NodeManager.txt] 2015-12-04 14:42:12,902 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.jar transitioned from INIT to DOWNLOADING
[       NodeManager.txt] 2015-12-04 14:42:12,902 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.splitmetainfo transitioned from INIT to DOWNLOADING
[       NodeManager.txt] 2015-12-04 14:42:12,902 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.split transitioned from INIT to DOWNLOADING
[       NodeManager.txt] 2015-12-04 14:42:12,902 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.xml transitioned from INIT to DOWNLOADING
[       NodeManager.txt] 2015-12-04 14:42:12,903 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:13,030 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/nmPrivate/container_1449210909990_0001_01_000001.tokens. Credentials list:
[       NodeManager.txt] 2015-12-04 14:42:13,034 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user bruce
[       NodeManager.txt] 2015-12-04 14:42:13,067 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/nmPrivate/container_1449210909990_0001_01_000001.tokens to /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000001.tokens
[       NodeManager.txt] 2015-12-04 14:42:13,067 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set to /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001 = file:/uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:13,432 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:13,438 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
[          DataNode.txt] 2015-12-04 14:42:13,613 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49626, bytes: 11, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1962415803_67, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_4998228328493697677_13438, duration: 1909552
[       NodeManager.txt] 2015-12-04 14:42:13,689 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/appTokens transitioned from DOWNLOADING to LOCALIZED
[          DataNode.txt] 2015-12-04 14:42:13,712 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49626, bytes: 11972, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1962415803_67, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-3104879886624780072_13430, duration: 272905
[       NodeManager.txt] 2015-12-04 14:42:13,743 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.jar transitioned from DOWNLOADING to LOCALIZED
[          DataNode.txt] 2015-12-04 14:42:13,762 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49626, bytes: 26, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1962415803_67, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-2734758558562150374_13434, duration: 188281
[       NodeManager.txt] 2015-12-04 14:42:13,786 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.splitmetainfo transitioned from DOWNLOADING to LOCALIZED
[          DataNode.txt] 2015-12-04 14:42:13,804 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49626, bytes: 154, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1962415803_67, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_3649142917674748753_13432, duration: 292103
[       NodeManager.txt] 2015-12-04 14:42:13,858 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.split transitioned from DOWNLOADING to LOCALIZED
[          DataNode.txt] 2015-12-04 14:42:13,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49626, bytes: 61211, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1962415803_67, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-1097534736843614414_13436, duration: 334905
[       NodeManager.txt] 2015-12-04 14:42:13,907 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://192.168.100.200:9000/tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001/job.xml transitioned from DOWNLOADING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:13,909 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000001 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:13,952 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000001 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:13,959 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000001/default_container_executor.sh]
[       NodeManager.txt] 2015-12-04 14:42:14,220 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:14,305 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 40.6 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:14,438 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:15,442 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:16,445 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:17,361 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 184.7 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:17,448 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          DataNode.txt] 2015-12-04 14:42:17,813 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49631, bytes: 26, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-2734758558562150374_13434, duration: 317197
[       NodeManager.txt] 2015-12-04 14:42:18,451 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:18,757 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:18,758 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Register App Master	TARGET=ApplicationMasterService	RESULT=SUCCESS	APPID=application_1449210909990_0001	APPATTEMPTID=appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:18,759 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from LAUNCHED to RUNNING
[   ResourceManager.txt] 2015-12-04 14:42:18,759 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1449210909990_0001 State change from ACCEPTED to RUNNING
[          NameNode.txt] 2015-12-04 14:42:19,031 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001_1_conf.xml. BP-981411196-192.168.100.200-1447912540337 blk_-9059741398368895896_13441{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:19,087 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-9059741398368895896_13441 src: /192.168.100.200:49635 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:19,097 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49635, dest: /192.168.100.200:50010, bytes: 70809, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-9059741398368895896_13441, duration: 8634961
[          DataNode.txt] 2015-12-04 14:42:19,098 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-9059741398368895896_13441, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:19,101 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001_1_conf.xml is closed by DFSClient_NONMAPREDUCE_-64196293_1
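The job conf (`job_..._conf.xml`) has now been written to HDFS and the file closed. The DataNode `clienttrace` lines carry the per-block transfer metrics (bytes, operation, client, duration in nanoseconds); a small sketch for extracting them, assuming the field layout shown in the lines above (`parse_clienttrace` is a hypothetical helper of mine, not part of Hadoop):

```python
import re

# Matches the leading fields of a DataNode clienttrace record, e.g.
# "src: /ip:port, dest: /ip:port, bytes: 70809, op: HDFS_WRITE, cliID: ..., offset: 0"
CLIENTTRACE_RE = re.compile(
    r"src: (?P<src>\S+), dest: (?P<dest>\S+), bytes: (?P<nbytes>\d+), "
    r"op: (?P<op>\w+), cliID: (?P<cli>\S+), offset: (?P<offset>\d+)"
)

def parse_clienttrace(line):
    """Return the transfer fields of a clienttrace line as a dict,
    or None if the line is not a clienttrace record."""
    m = CLIENTTRACE_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    d["nbytes"] = int(d["nbytes"])
    return d
```

Summing `nbytes` per `op` (HDFS_WRITE vs HDFS_READ) gives a rough picture of how much data the job staged versus read back.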
[       NodeManager.txt] 2015-12-04 14:42:19,455 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[         MapRedJob.txt] 2015-12-04 14:42:20,000 INFO mapreduce.Job: Job job_1449210909990_0001 running in uber mode : false
[         MapRedJob.txt] 2015-12-04 14:42:20,000 INFO mapreduce.Job:  map 0% reduce 0%
[       NodeManager.txt] 2015-12-04 14:42:20,415 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.9 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:20,458 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:20,461 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000002 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:20,461 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000002
[   ResourceManager.txt] 2015-12-04 14:42:20,462 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000002 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 2 containers, <memory:3072, vCores:2> used and <memory:1024, vCores:6> available
[   ResourceManager.txt] 2015-12-04 14:42:20,820 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
[       NodeManager.txt] 2015-12-04 14:42:20,969 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000002 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:20,970 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:20,973 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000002 to application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:20,976 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000002 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:20,976 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:20,977 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce.shuffle
[       NodeManager.txt] 2015-12-04 14:42:20,981 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:20,981 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000002 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:21,012 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000002 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:21,018 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000002/default_container_executor.sh]
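The map container has just walked the NodeManager-side lifecycle NEW → LOCALIZING → LOCALIZED → RUNNING and its launch script has been executed. The "transitioned from X to Y" lines are regular enough that the whole per-container state machine can be reconstructed from the log; a sketch of that idea (my own helper, not a Hadoop API):

```python
import re

# Matches NodeManager lines like:
# "Container container_..._01_000002 transitioned from NEW to LOCALIZING"
TRANSITION_RE = re.compile(
    r"Container (container_\S+) transitioned from (\w+) to (\w+)"
)

def transitions(lines):
    """Map each container id to the ordered list of states it passed
    through, starting with the source state of its first transition."""
    out = {}
    for line in lines:
        m = TRANSITION_RE.search(line)
        if m:
            cid, src, dst = m.groups()
            out.setdefault(cid, [src]).append(dst)
    return out
```

Applied to this log it reproduces the full path of container 000002, including the later RUNNING → KILLING → CONTAINER_CLEANEDUP_AFTER_KILL → DONE tail once the AM stops it.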
[       NodeManager.txt] 2015-12-04 14:42:21,462 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:21,463 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 2, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:21,466 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000002 Container Transitioned from ACQUIRED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:22,466 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:22,467 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 2, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          DataNode.txt] 2015-12-04 14:42:23,217 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49641, bytes: 154, op: HDFS_READ, cliID: DFSClient_attempt_1449210909990_0001_m_000000_0_56561954_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_3649142917674748753_13432, duration: 301850
[          DataNode.txt] 2015-12-04 14:42:23,332 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49641, bytes: 39302, op: HDFS_READ, cliID: DFSClient_attempt_1449210909990_0001_m_000000_0_56561954_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_1205092977041424139_13154, duration: 496018
[       NodeManager.txt] 2015-12-04 14:42:23,415 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:23,465 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 271.5 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:23,470 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:23,470 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 2, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:23,510 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5074 for container-id container_1449210909990_0001_01_000002: 216.9 MB of 1 GB physical memory used; 699.1 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:23,606 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:23,609 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000002 transitioned from RUNNING to KILLING
[       NodeManager.txt] 2015-12-04 14:42:23,610 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:23,610 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          NameNode.txt] 2015-12-04 14:42:23,640 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001_1.jhist. BP-981411196-192.168.100.200-1447912540337 blk_6129522205581518625_13442{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:23,643 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_6129522205581518625_13442 src: /192.168.100.200:49643 dest: /192.168.100.200:50010
[          NameNode.txt] 2015-12-04 14:42:23,651 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001_1.jhist for DFSClient_NONMAPREDUCE_-64196293_1
[       NodeManager.txt] 2015-12-04 14:42:23,653 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 2, }, state: C_RUNNING, diagnostics: "Container killed by the ApplicationMaster.\n", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:23,776 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from task is : 143
[       NodeManager.txt] 2015-12-04 14:42:23,776 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
[       NodeManager.txt] 2015-12-04 14:42:23,781 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000002 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
[       NodeManager.txt] 2015-12-04 14:42:23,784 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:23,784 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:23,787 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000002 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
[       NodeManager.txt] 2015-12-04 14:42:23,787 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000002 from application application_1449210909990_0001
[         MapRedJob.txt] 2015-12-04 14:42:24,000 INFO mapreduce.Job:  map 100% reduce 0%
[       NodeManager.txt] 2015-12-04 14:42:24,657 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:24,658 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 2, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143,
[       NodeManager.txt] 2015-12-04 14:42:24,658 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000002
[   ResourceManager.txt] 2015-12-04 14:42:24,664 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000002 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:24,664 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000002 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:24,664 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000002
[   ResourceManager.txt] 2015-12-04 14:42:24,665 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000002 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:2048, vCores:7> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:24,665 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000002 on node: host: irobot:11000 #containers=1 available=2048 used=2048 with event: FINISHED
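With the map container finished, the scheduler reclaims its capacity. The FiCaSchedulerNode lines express capacity/used/available as `<memory:..., vCores:...>` pairs; pulling them out makes it easy to follow the node's resource headroom over time (a hypothetical helper, not a Hadoop API):

```python
import re

# Matches every "<memory:1024, vCores:1>" style resource pair in a line.
RES_RE = re.compile(r"<memory:(\d+), vCores:(\d+)>")

def resources(line):
    """Return all (memory_mb, vcores) pairs appearing in a scheduler line,
    in order of appearance (typically: capacity, used, available)."""
    return [(int(m), int(v)) for m, v in RES_RE.findall(line)]
```

On the release line above this yields the released capacity followed by the node's new used and available totals.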
[       NodeManager.txt] 2015-12-04 14:42:25,663 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:25,666 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000003 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:25,666 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000003
[   ResourceManager.txt] 2015-12-04 14:42:25,666 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000003 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 2 containers, <memory:3072, vCores:2> used and <memory:1024, vCores:6> available
[   ResourceManager.txt] 2015-12-04 14:42:25,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000004 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:25,667 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000004
[   ResourceManager.txt] 2015-12-04 14:42:25,667 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000004 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 3 containers, <memory:4096, vCores:3> used and <memory:0, vCores:5> available
[   ResourceManager.txt] 2015-12-04 14:42:25,886 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
[   ResourceManager.txt] 2015-12-04 14:42:25,887 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000004 Container Transitioned from ALLOCATED to ACQUIRED
[       NodeManager.txt] 2015-12-04 14:42:25,915 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000003 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:25,915 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000004 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:25,916 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:25,917 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000003 to application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:25,917 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:25,918 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000004 to application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:25,919 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000003 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:25,919 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000004 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:25,920 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:25,920 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce.shuffle
[       NodeManager.txt] 2015-12-04 14:42:25,920 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:25,920 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:25,920 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce.shuffle
[       NodeManager.txt] 2015-12-04 14:42:25,921 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:25,921 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000003 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:25,922 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000004 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:25,954 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000003 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:25,955 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000004 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:25,962 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000003/default_container_executor.sh]
[       NodeManager.txt] 2015-12-04 14:42:25,963 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000004/default_container_executor.sh]
[       NodeManager.txt] 2015-12-04 14:42:26,511 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:26,511 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:26,511 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000002
[       NodeManager.txt] 2015-12-04 14:42:26,569 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5135 for container-id container_1449210909990_0001_01_000003: 50.7 MB of 1 GB physical memory used; 685.8 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:26,615 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5137 for container-id container_1449210909990_0001_01_000004: 52.4 MB of 1 GB physical memory used; 685.8 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:26,656 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 261.9 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:26,666 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:26,667 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 3, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:26,668 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:26,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000003 Container Transitioned from ACQUIRED to RUNNING
[   ResourceManager.txt] 2015-12-04 14:42:26,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000004 Container Transitioned from ACQUIRED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:27,671 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:27,671 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 3, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:27,672 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:28,675 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:28,675 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 3, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:28,676 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:29,678 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:29,679 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 3, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:29,679 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:29,713 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5135 for container-id container_1449210909990_0001_01_000003: 153.1 MB of 1 GB physical memory used; 705.2 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:29,763 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5137 for container-id container_1449210909990_0001_01_000004: 154.4 MB of 1 GB physical memory used; 705.2 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:29,811 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.0 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
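The ContainersMonitorImpl lines above are where the NodeManager enforces the memory limits; a container that exceeded its physical or virtual limit here would be killed. Extracting the physical-memory figures from these lines gives a simple usage trace per container (again a hypothetical helper of mine, assuming the "X MB of Y GB" wording shown above):

```python
import re

# Matches: "Memory usage of ProcessTree 5135 for container-id container_...:
#           153.1 MB of 1 GB physical memory used"
MEM_RE = re.compile(
    r"Memory usage of ProcessTree (\d+) for container-id (container_\S+): "
    r"([\d.]+) MB of (\d+) GB physical memory used"
)

def physical_usage(line):
    """Return (container_id, used_mb, limit_mb) for a monitor line,
    or None if the line is not a memory-usage record."""
    m = MEM_RE.search(line)
    if m is None:
        return None
    _pid, cid, used_mb, limit_gb = m.groups()
    return cid, float(used_mb), int(limit_gb) * 1024
```

Note the units are mixed in the log itself (MB used, GB limit), which is why the helper normalizes the limit to MB.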
[          NameNode.txt] 2015-12-04 14:42:29,887 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000001_0/part-r-00001. BP-981411196-192.168.100.200-1447912540337 blk_3793823084731085224_13445{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          NameNode.txt] 2015-12-04 14:42:29,891 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000000_0/part-r-00000. BP-981411196-192.168.100.200-1447912540337 blk_-2716326799970820606_13446{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:29,989 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_3793823084731085224_13445 src: /192.168.100.200:49657 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:29,990 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-2716326799970820606_13446 src: /192.168.100.200:49658 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:30,010 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49657, dest: /192.168.100.200:50010, bytes: 2240, op: HDFS_WRITE, cliID: DFSClient_attempt_1449210909990_0001_r_000001_0_1444032603_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_3793823084731085224_13445, duration: 17366545
[          DataNode.txt] 2015-12-04 14:42:30,010 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_3793823084731085224_13445, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:30,013 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* checkFileProgress: blk_3793823084731085224_13445{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]} has not reached minimal replication 1
[          DataNode.txt] 2015-12-04 14:42:30,017 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49658, dest: /192.168.100.200:50010, bytes: 2555, op: HDFS_WRITE, cliID: DFSClient_attempt_1449210909990_0001_r_000000_0_-1346111399_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-2716326799970820606_13446, duration: 21139476
[          DataNode.txt] 2015-12-04 14:42:30,017 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-2716326799970820606_13446, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:30,020 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000000_0/part-r-00000 is closed by DFSClient_attempt_1449210909990_0001_r_000000_0_-1346111399_1
[       NodeManager.txt] 2015-12-04 14:42:30,202 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:30,205 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000003 transitioned from RUNNING to KILLING
[       NodeManager.txt] 2015-12-04 14:42:30,205 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:30,206 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:30,207 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 3, }, state: C_RUNNING, diagnostics: "Container killed by the ApplicationMaster.\n", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:30,208 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          NameNode.txt] 2015-12-04 14:42:30,417 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000001_0/part-r-00001 is closed by DFSClient_attempt_1449210909990_0001_r_000001_0_1444032603_1
[       NodeManager.txt] 2015-12-04 14:42:30,466 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from task is : 143
[       NodeManager.txt] 2015-12-04 14:42:30,466 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
[       NodeManager.txt] 2015-12-04 14:42:30,466 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000003 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
[       NodeManager.txt] 2015-12-04 14:42:30,467 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:30,467 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:30,468 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000003 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
[       NodeManager.txt] 2015-12-04 14:42:30,468 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000003 from application application_1449210909990_0001
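上面 NodeManager 的几行日志完整呈现了一个 container 的收尾过程：RUNNING → KILLING → CONTAINER_CLEANEDUP_AFTER_KILL → DONE。想从大段 NodeManager 日志里把这条状态机轨迹抽出来，可以用下面这个简单草稿（`extract_transitions` 是笔者自拟的示意函数，正则完全按本文日志的措辞假设）：

```python
import re

# 匹配 NodeManager 的状态迁移日志，例如：
# "Container container_..._000003 transitioned from RUNNING to KILLING"
TRANSITION_RE = re.compile(
    r"Container (container_\S+) transitioned from (\w+) to (\w+)"
)

def extract_transitions(lines):
    """返回 (container_id, 起始状态, 目标状态) 三元组列表，按日志顺序。"""
    result = []
    for line in lines:
        m = TRANSITION_RE.search(line)
        if m:
            result.append(m.groups())
    return result
```

按 container id 分组后，即可核对每个 container 是否走完了完整的生命周期。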
[       NodeManager.txt] 2015-12-04 14:42:30,596 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:30,597 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000004 transitioned from RUNNING to KILLING
[       NodeManager.txt] 2015-12-04 14:42:30,597 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:30,598 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:30,607 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 3, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143,
[       NodeManager.txt] 2015-12-04 14:42:30,607 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:30,609 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_RUNNING, diagnostics: "Container killed by the ApplicationMaster.\n", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:30,620 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000003 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:30,620 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000003 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:30,620 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000003
[   ResourceManager.txt] 2015-12-04 14:42:30,621 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000003 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 2 containers, <memory:3072, vCores:2> used and <memory:1024, vCores:6> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:30,621 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000003 on node: host: irobot:11000 #containers=2 available=1024 used=3072 with event: FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:30,621 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000005 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:30,621 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000005
[   ResourceManager.txt] 2015-12-04 14:42:30,622 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000005 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 3 containers, <memory:4096, vCores:3> used and <memory:0, vCores:5> available
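ResourceManager 调度器的每行 Released/Assigned 日志都带着若干 `<memory:..., vCores:...>` 资源元组（本次分配的容量、节点已用、节点剩余）。下面是一个取出这些元组的小草稿（示意用途，函数名为笔者自拟）：

```python
import re

# 匹配调度器日志中的资源元组，例如 "<memory:1024, vCores:1>"
RESOURCE_RE = re.compile(r"<memory:(\d+), vCores:(\d+)>")

def resources(line):
    """按出现顺序返回一行日志里所有 (memory_mb, vcores) 元组。"""
    return [(int(mem), int(cores)) for mem, cores in RESOURCE_RE.findall(line)]
```

以上面 Assigned container 那行为例，三个元组依次是容器容量、节点已用、节点可用，可以据此还原节点资源随时间的消长。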
[       NodeManager.txt] 2015-12-04 14:42:30,786 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from task is : 143
[       NodeManager.txt] 2015-12-04 14:42:30,786 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
[       NodeManager.txt] 2015-12-04 14:42:30,787 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000004 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
[       NodeManager.txt] 2015-12-04 14:42:30,788 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:30,788 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:30,789 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000004 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
[       NodeManager.txt] 2015-12-04 14:42:30,789 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000004 from application application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:30,909 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000005 Container Transitioned from ALLOCATED to ACQUIRED
[       NodeManager.txt] 2015-12-04 14:42:30,926 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000005 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:30,926 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:30,927 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000005 to application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:30,928 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000005 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:30,928 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:30,929 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce.shuffle
[       NodeManager.txt] 2015-12-04 14:42:30,929 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:30,929 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000005 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:30,961 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000005 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:30,970 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000005/default_container_executor.sh]
[         MapRedJob.txt] 2015-12-04 14:42:31,000 INFO mapreduce.Job:  map 100% reduce 40%
[       NodeManager.txt] 2015-12-04 14:42:31,621 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:31,622 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 4, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143,
[       NodeManager.txt] 2015-12-04 14:42:31,622 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:31,623 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 5, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:31,626 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000005 Container Transitioned from ACQUIRED to RUNNING
[   ResourceManager.txt] 2015-12-04 14:42:31,627 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000004 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:31,627 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000004 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:31,627 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000004
[   ResourceManager.txt] 2015-12-04 14:42:31,627 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000004 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 2 containers, <memory:3072, vCores:2> used and <memory:1024, vCores:6> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:31,627 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000004 on node: host: irobot:11000 #containers=2 available=1024 used=3072 with event: FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:31,628 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000006 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:31,628 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000006
[   ResourceManager.txt] 2015-12-04 14:42:31,628 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000006 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 3 containers, <memory:4096, vCores:3> used and <memory:0, vCores:5> available
[   ResourceManager.txt] 2015-12-04 14:42:31,919 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000006 Container Transitioned from ALLOCATED to ACQUIRED
[       NodeManager.txt] 2015-12-04 14:42:31,928 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000006 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:31,929 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:31,929 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000006 to application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:31,929 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000006 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:31,930 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:31,930 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce.shuffle
[       NodeManager.txt] 2015-12-04 14:42:31,930 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:31,930 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000006 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:31,959 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000006 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:31,968 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000006/default_container_executor.sh]
[       NodeManager.txt] 2015-12-04 14:42:32,626 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:32,627 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 5, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:32,628 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 6, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:32,631 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000006 Container Transitioned from ACQUIRED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:32,811 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:32,811 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:32,811 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000003
[       NodeManager.txt] 2015-12-04 14:42:32,811 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000004
[       NodeManager.txt] 2015-12-04 14:42:32,856 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5295 for container-id container_1449210909990_0001_01_000006: 72.9 MB of 1 GB physical memory used; 688.9 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:32,904 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5270 for container-id container_1449210909990_0001_01_000005: 112.2 MB of 1 GB physical memory used; 695.1 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:32,945 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.3 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
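ContainersMonitorImpl 会周期性地打印每个 container 进程树的内存占用。要把这些监控行变成结构化数据，可以参考下面的草稿（`parse_memory` 为笔者自拟的示意函数，正则按本文日志的措辞假设，只取物理内存部分）：

```python
import re

# 匹配 ContainersMonitorImpl 的内存监控日志，例如：
# "Memory usage of ProcessTree 5295 for container-id container_...: 72.9 MB of 1 GB physical memory used"
MEM_RE = re.compile(
    r"ProcessTree (\d+) for container-id (container_\w+): "
    r"([\d.]+) ([MG]B) of ([\d.]+) ([MG]B) physical memory used"
)

def parse_memory(line):
    """返回 (container_id, 已用量, 已用单位, 上限, 上限单位)；不匹配则返回 None。"""
    m = MEM_RE.search(line)
    if not m:
        return None
    _pid, cid, used, used_unit, limit, limit_unit = m.groups()
    return cid, float(used), used_unit, float(limit), limit_unit
```

把多次采样串起来，就能看到某个 container（比如上面的 000006，从 72.9 MB 涨到 168.3 MB）在 reduce 阶段的内存增长曲线。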
[       NodeManager.txt] 2015-12-04 14:42:33,631 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:33,632 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 5, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:33,632 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 6, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:34,635 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:34,636 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 5, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:34,636 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 6, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          NameNode.txt] 2015-12-04 14:42:34,767 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000002_0/part-r-00002. BP-981411196-192.168.100.200-1447912540337 blk_-8495708323858144996_13448{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:34,868 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-8495708323858144996_13448 src: /192.168.100.200:49671 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:34,896 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49671, dest: /192.168.100.200:50010, bytes: 2555, op: HDFS_WRITE, cliID: DFSClient_attempt_1449210909990_0001_r_000002_0_-870471466_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-8495708323858144996_13448, duration: 26831085
[          DataNode.txt] 2015-12-04 14:42:34,896 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-8495708323858144996_13448, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:34,899 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000002_0/part-r-00002 is closed by DFSClient_attempt_1449210909990_0001_r_000002_0_-870471466_1
[       NodeManager.txt] 2015-12-04 14:42:35,071 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:35,072 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000005 transitioned from RUNNING to KILLING
[       NodeManager.txt] 2015-12-04 14:42:35,072 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:35,072 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:35,073 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 5, }, state: C_RUNNING, diagnostics: "Container killed by the ApplicationMaster.\n", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:35,073 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 6, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:35,312 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from task is : 143
[       NodeManager.txt] 2015-12-04 14:42:35,313 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
[       NodeManager.txt] 2015-12-04 14:42:35,313 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000005 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
[       NodeManager.txt] 2015-12-04 14:42:35,314 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:35,314 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:35,315 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000005 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
[       NodeManager.txt] 2015-12-04 14:42:35,315 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000005 from application application_1449210909990_0001
[          NameNode.txt] 2015-12-04 14:42:35,748 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000003_0/part-r-00003. BP-981411196-192.168.100.200-1447912540337 blk_-3671039127630568325_13450{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:35,858 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-3671039127630568325_13450 src: /192.168.100.200:49675 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:35,875 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49675, dest: /192.168.100.200:50010, bytes: 2380, op: HDFS_WRITE, cliID: DFSClient_attempt_1449210909990_0001_r_000003_0_-1651998447_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-3671039127630568325_13450, duration: 15004114
[          DataNode.txt] 2015-12-04 14:42:35,875 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-3671039127630568325_13450, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:35,878 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000003_0/part-r-00003 is closed by DFSClient_attempt_1449210909990_0001_r_000003_0_-1651998447_1
[       NodeManager.txt] 2015-12-04 14:42:35,946 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:35,986 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5295 for container-id container_1449210909990_0001_01_000006: 168.3 MB of 1 GB physical memory used; 705.2 MB of 2.1 GB virtual memory used
[         MapRedJob.txt] 2015-12-04 14:42:36,000 INFO mapreduce.Job:  map 100% reduce 60%
[       NodeManager.txt] 2015-12-04 14:42:36,027 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.4 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:36,049 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:36,050 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000006 transitioned from RUNNING to KILLING
[       NodeManager.txt] 2015-12-04 14:42:36,050 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:36,051 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:36,051 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 5, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143,
[       NodeManager.txt] 2015-12-04 14:42:36,059 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000005
[       NodeManager.txt] 2015-12-04 14:42:36,060 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 6, }, state: C_RUNNING, diagnostics: "Container killed by the ApplicationMaster.\n", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:36,071 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000005 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:36,071 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000005 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:36,072 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000005
[   ResourceManager.txt] 2015-12-04 14:42:36,072 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000005 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 2 containers, <memory:3072, vCores:2> used and <memory:1024, vCores:6> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:36,072 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000005 on node: host: irobot:11000 #containers=2 available=1024 used=3072 with event: FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:36,072 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000007 Container Transitioned from NEW to ALLOCATED
[   ResourceManager.txt] 2015-12-04 14:42:36,073 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Allocated Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000007
[   ResourceManager.txt] 2015-12-04 14:42:36,073 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Assigned container container_1449210909990_0001_01_000007 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 3 containers, <memory:4096, vCores:3> used and <memory:0, vCores:5> available
[       NodeManager.txt] 2015-12-04 14:42:36,203 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from task is : 143
[       NodeManager.txt] 2015-12-04 14:42:36,204 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
[       NodeManager.txt] 2015-12-04 14:42:36,204 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000006 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
[       NodeManager.txt] 2015-12-04 14:42:36,205 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:36,205 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:36,206 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000006 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
[       NodeManager.txt] 2015-12-04 14:42:36,206 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000006 from application application_1449210909990_0001
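The RUNNING → KILLING → CONTAINER_CLEANEDUP_AFTER_KILL → DONE sequence above is easiest to follow by pulling just the state transitions out of the NodeManager log. A minimal Python sketch (the regex is an assumption read off the "transitioned from X to Y" wording in these lines, not an official log format):

```python
import re

# Matches NodeManager lines like:
#   "... Container container_..._000006 transitioned from RUNNING to KILLING"
TRANSITION_RE = re.compile(
    r"Container (container_\S+) transitioned from (\S+) to (\S+)"
)

def container_transitions(lines):
    """Yield (container_id, old_state, new_state) for each transition line."""
    for line in lines:
        m = TRANSITION_RE.search(line)
        if m:
            yield m.group(1), m.group(2), m.group(3)

sample = [
    "... Container container_1449210909990_0001_01_000006 transitioned from RUNNING to KILLING",
    "... Container container_1449210909990_0001_01_000006 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL",
]
for cid, old, new in container_transitions(sample):
    print(cid, old, "->", new)
```

Running this over the full NodeManager.txt reconstructs each container's lifecycle (NEW → LOCALIZING → LOCALIZED → RUNNING → … → DONE) without the surrounding noise.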
[   ResourceManager.txt] 2015-12-04 14:42:36,938 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000007 Container Transitioned from ALLOCATED to ACQUIRED
[       NodeManager.txt] 2015-12-04 14:42:36,950 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1449210909990_0001_01_000007 by user bruce
[       NodeManager.txt] 2015-12-04 14:42:36,951 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:36,952 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1449210909990_0001_01_000007 to application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:36,953 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000007 transitioned from NEW to LOCALIZING
[       NodeManager.txt] 2015-12-04 14:42:36,953 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:36,953 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce.shuffle
[       NodeManager.txt] 2015-12-04 14:42:36,953 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:36,954 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000007 transitioned from LOCALIZING to LOCALIZED
[       NodeManager.txt] 2015-12-04 14:42:36,982 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000007 transitioned from LOCALIZED to RUNNING
[       NodeManager.txt] 2015-12-04 14:42:36,988 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000007/default_container_executor.sh]
[         MapRedJob.txt] 2015-12-04 14:42:37,000 INFO mapreduce.Job:  map 100% reduce 80%
[       NodeManager.txt] 2015-12-04 14:42:37,074 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:37,075 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 6, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143,
[       NodeManager.txt] 2015-12-04 14:42:37,075 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:37,076 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 7, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:37,080 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000007 Container Transitioned from ACQUIRED to RUNNING
[   ResourceManager.txt] 2015-12-04 14:42:37,081 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000006 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:37,081 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000006 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:37,081 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000006
[   ResourceManager.txt] 2015-12-04 14:42:37,081 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000006 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 2 containers, <memory:3072, vCores:2> used and <memory:1024, vCores:6> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:37,082 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000006 on node: host: irobot:11000 #containers=2 available=1024 used=3072 with event: FINISHED
[       NodeManager.txt] 2015-12-04 14:42:38,080 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:38,080 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 7, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:39,027 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:39,028 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000006
[       NodeManager.txt] 2015-12-04 14:42:39,064 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 5403 for container-id container_1449210909990_0001_01_000007: 113.0 MB of 1 GB physical memory used; 705.2 MB of 2.1 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:39,083 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:39,084 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 7, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:39,099 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.4 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:40,086 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:40,087 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 7, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          NameNode.txt] 2015-12-04 14:42:40,758 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000004_0/part-r-00004. BP-981411196-192.168.100.200-1447912540337 blk_3855308464909776116_13452{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:40,860 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_3855308464909776116_13452 src: /192.168.100.200:49684 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:40,878 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49684, dest: /192.168.100.200:50010, bytes: 2625, op: HDFS_WRITE, cliID: DFSClient_attempt_1449210909990_0001_r_000004_0_-1930535325_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_3855308464909776116_13452, duration: 16351809
[          DataNode.txt] 2015-12-04 14:42:40,879 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_3855308464909776116_13452, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:40,882 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/out/_temporary/1/_temporary/attempt_1449210909990_0001_r_000004_0/part-r-00004 is closed by DFSClient_attempt_1449210909990_0001_r_000004_0_-1930535325_1
[         MapRedJob.txt] 2015-12-04 14:42:41,000 INFO mapreduce.Job:  map 100% reduce 100%
[         MapRedJob.txt] 2015-12-04 14:42:41,000 INFO mapreduce.Job: Job job_1449210909990_0001 completed successfully
[         MapRedJob.txt] 2015-12-04 14:42:41,000 INFO mapreduce.Job: Counters: 43
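The client console lines above ("map 100% reduce 100%", then "completed successfully") are the progress reports the client polls from the AM. If you want to track progress programmatically from this console output, a small sketch (assuming only the "map X% reduce Y%" format shown in the MapRedJob.txt lines):

```python
import re

# Matches client console lines like "INFO mapreduce.Job:  map 100% reduce 60%"
PROGRESS_RE = re.compile(r"map (\d+)% reduce (\d+)%")

def progress(line):
    """Return (map_pct, reduce_pct), or None if this is not a progress line."""
    m = PROGRESS_RE.search(line)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(progress("14:42:41 INFO mapreduce.Job:  map 100% reduce 100%"))  # → (100, 100)
```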
[       NodeManager.txt] 2015-12-04 14:42:41,064 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:41,065 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000007 transitioned from RUNNING to KILLING
[       NodeManager.txt] 2015-12-04 14:42:41,065 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:41,065 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:41,065 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 7, }, state: C_RUNNING, diagnostics: "Container killed by the ApplicationMaster.\n", exit_status: -1000,
[          NameNode.txt] 2015-12-04 14:42:41,169 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bruce/out/_SUCCESS is closed by DFSClient_NONMAPREDUCE_-64196293_1
[          DataNode.txt] 2015-12-04 14:42:41,202 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49643, dest: /192.168.100.200:50010, bytes: 61310, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_6129522205581518625_13442, duration: 17556390012
[          DataNode.txt] 2015-12-04 14:42:41,203 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_6129522205581518625_13442, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:41,205 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/bruce/.staging/job_1449210909990_0001_1.jhist is closed by DFSClient_NONMAPREDUCE_-64196293_1
[       NodeManager.txt] 2015-12-04 14:42:41,213 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from task is : 143
[       NodeManager.txt] 2015-12-04 14:42:41,213 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
[       NodeManager.txt] 2015-12-04 14:42:41,214 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000007 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
[       NodeManager.txt] 2015-12-04 14:42:41,215 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:41,215 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Killed	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:41,215 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000007 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
[       NodeManager.txt] 2015-12-04 14:42:41,215 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000007 from application application_1449210909990_0001
[          NameNode.txt] 2015-12-04 14:42:41,221 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/history/done_intermediate/bruce/job_1449210909990_0001.summary_tmp. BP-981411196-192.168.100.200-1447912540337 blk_-7104736880875874536_13455{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:41,224 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-7104736880875874536_13455 src: /192.168.100.200:49687 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:41,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49687, dest: /192.168.100.200:50010, bytes: 346, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-7104736880875874536_13455, duration: 2389770
[          DataNode.txt] 2015-12-04 14:42:41,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-7104736880875874536_13455, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:41,231 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/history/done_intermediate/bruce/job_1449210909990_0001.summary_tmp is closed by DFSClient_NONMAPREDUCE_-64196293_1
[          DataNode.txt] 2015-12-04 14:42:41,275 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49688, bytes: 61790, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_6129522205581518625_13442, duration: 369502
[          NameNode.txt] 2015-12-04 14:42:41,278 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/history/done_intermediate/bruce/job_1449210909990_0001-1449211332031-bruce-wordcount-1449211361176-1-5-SUCCEEDED-default.jhist_tmp. BP-981411196-192.168.100.200-1447912540337 blk_8005563592996227050_13457{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:41,281 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_8005563592996227050_13457 src: /192.168.100.200:49689 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:41,286 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49689, dest: /192.168.100.200:50010, bytes: 61310, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_8005563592996227050_13457, duration: 3038932
[          DataNode.txt] 2015-12-04 14:42:41,286 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_8005563592996227050_13457, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:41,288 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/history/done_intermediate/bruce/job_1449210909990_0001-1449211332031-bruce-wordcount-1449211361176-1-5-SUCCEEDED-default.jhist_tmp is closed by DFSClient_NONMAPREDUCE_-64196293_1
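Note the .jhist file name the AM writes to done_intermediate: it packs job metadata into dash-separated fields (job id, submit time, user, job name, finish time, map count, reduce count, status, queue). The sketch below parses it; the field list is an assumption read off the file name in the log above, not the official JobHistory naming API, and it only handles job names without dashes:

```python
def parse_jhist_name(name):
    """Split a .jhist file name into its metadata fields (simple case only)."""
    fields = name[: -len(".jhist")].split("-")
    keys = ["job_id", "submit_time", "user", "job_name",
            "finish_time", "num_maps", "num_reduces", "status", "queue"]
    return dict(zip(keys, fields))

info = parse_jhist_name(
    "job_1449210909990_0001-1449211332031-bruce-wordcount"
    "-1449211361176-1-5-SUCCEEDED-default.jhist"
)
print(info["job_name"], info["num_maps"], info["num_reduces"], info["status"])
```

This matches what the job ran: 1 map, 5 reduces, SUCCEEDED, on the default queue.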
[          DataNode.txt] 2015-12-04 14:42:41,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49688, bytes: 71365, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-9059741398368895896_13441, duration: 534650
[          NameNode.txt] 2015-12-04 14:42:41,331 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hadoop-yarn/staging/history/done_intermediate/bruce/job_1449210909990_0001_conf.xml_tmp. BP-981411196-192.168.100.200-1447912540337 blk_-8761021435823419813_13459{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[192.168.100.200:50010|RBW]]}
[          DataNode.txt] 2015-12-04 14:42:41,334 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-981411196-192.168.100.200-1447912540337:blk_-8761021435823419813_13459 src: /192.168.100.200:49690 dest: /192.168.100.200:50010
[          DataNode.txt] 2015-12-04 14:42:41,339 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:49690, dest: /192.168.100.200:50010, bytes: 70809, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-64196293_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-8761021435823419813_13459, duration: 2894684
[          DataNode.txt] 2015-12-04 14:42:41,339 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-981411196-192.168.100.200-1447912540337:blk_-8761021435823419813_13459, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
[          NameNode.txt] 2015-12-04 14:42:41,341 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/history/done_intermediate/bruce/job_1449210909990_0001_conf.xml_tmp is closed by DFSClient_NONMAPREDUCE_-64196293_1
[       NodeManager.txt] 2015-12-04 14:42:42,077 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:42,077 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 7, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143,
[       NodeManager.txt] 2015-12-04 14:42:42,078 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000007
[   ResourceManager.txt] 2015-12-04 14:42:42,083 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000007 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:42,083 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000007 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:42,083 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000007
[   ResourceManager.txt] 2015-12-04 14:42:42,083 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000007 of capacity <memory:1024, vCores:1> on host irobot:11000, which currently has 1 containers, <memory:2048, vCores:1> used and <memory:2048, vCores:7> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:42,084 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000007 on node: host: irobot:11000 #containers=1 available=2048 used=2048 with event: FINISHED
[       NodeManager.txt] 2015-12-04 14:42:42,099 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000007
[       NodeManager.txt] 2015-12-04 14:42:42,141 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.9 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
[          NameNode.txt] 2015-12-04 14:42:42,897 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.100.200:50010 to delete [blk_6129522205581518625_13442, blk_-9059741398368895896_13441]
[       NodeManager.txt] 2015-12-04 14:42:43,082 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:44,086 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[          DataNode.txt] 2015-12-04 14:42:44,816 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_6129522205581518625_13442 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_6129522205581518625 for deletion
[          DataNode.txt] 2015-12-04 14:42:44,816 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_-9059741398368895896_13441 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-9059741398368895896 for deletion
[          DataNode.txt] 2015-12-04 14:42:44,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_6129522205581518625_13442 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_6129522205581518625
[          DataNode.txt] 2015-12-04 14:42:44,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_-9059741398368895896_13441 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-9059741398368895896
[       NodeManager.txt] 2015-12-04 14:42:45,089 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[       NodeManager.txt] 2015-12-04 14:42:45,181 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4976 for container-id container_1449210909990_0001_01_000001: 262.9 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
[       NodeManager.txt] 2015-12-04 14:42:46,093 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
[   ResourceManager.txt] 2015-12-04 14:42:46,222 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from RUNNING to FINISHING
[   ResourceManager.txt] 2015-12-04 14:42:46,223 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1449210909990_0001 State change from RUNNING to FINISHING
[       NodeManager.txt] 2015-12-04 14:42:46,400 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1449210909990_0001_01_000001 succeeded
[       NodeManager.txt] 2015-12-04 14:42:46,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
[       NodeManager.txt] 2015-12-04 14:42:46,401 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:46,426 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001/container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:46,426 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=bruce	OPERATION=Container Finished - Succeeded	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:46,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1449210909990_0001_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
[       NodeManager.txt] 2015-12-04 14:42:46,427 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1449210909990_0001_01_000001 from application application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:47,097 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1449210909990, }, attemptId: 1, }, id: 1, }, state: C_COMPLETE, diagnostics: "", exit_status: 0,
[       NodeManager.txt] 2015-12-04 14:42:47,097 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1449210909990_0001_01_000001
[   ResourceManager.txt] 2015-12-04 14:42:47,100 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1449210909990_0001_01_000001 Container Transitioned from RUNNING to COMPLETED
[   ResourceManager.txt] 2015-12-04 14:42:47,100 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1449210909990_0001_01_000001 in state: COMPLETED event:FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:47,100 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000001
[   ResourceManager.txt] 2015-12-04 14:42:47,101 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode: Released container container_1449210909990_0001_01_000001 of capacity <memory:2048, vCores:1> on host irobot:11000, which currently has 0 containers, <memory:0, vCores:0> used and <memory:4096, vCores:8> available, release resources=true
[   ResourceManager.txt] 2015-12-04 14:42:47,101 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Application appattempt_1449210909990_0001_000001 released container container_1449210909990_0001_01_000001 on node: host: irobot:11000 #containers=0 available=4096 used=0 with event: FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:47,103 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1449210909990_0001_000001 State change from FINISHING to FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:47,104 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1449210909990_0001 State change from FINISHING to FINISHED
[   ResourceManager.txt] 2015-12-04 14:42:47,105 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bruce	OPERATION=Application Finished - Succeeded	TARGET=RMAppManager	RESULT=SUCCESS	APPID=application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:47,105 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1449210909990_0001_000001
[   ResourceManager.txt] 2015-12-04 14:42:47,106 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1449210909990_0001 requests cleared
[   ResourceManager.txt] 2015-12-04 14:42:47,108 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing info for app: application_1449210909990_0001
[   ResourceManager.txt] 2015-12-04 14:42:47,110 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1449210909990_0001,name=wordcount,user=bruce,queue=default,state=FINISHED,trackingUrl=192.168.100.200:54315/proxy/application_1449210909990_0001/jobhistory/job/job_1449210909990_0001,appMasterHost=irobot,startTime=1449211332046,finishTime=1449211366222
[       NodeManager.txt] 2015-12-04 14:42:47,117 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Trying to stop unknown container container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:47,117 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=UnknownUser	IP=192.168.100.200	OPERATION=Stop Container Request	TARGET=ContainerManagerImpl	RESULT=FAILURE	DESCRIPTION=Trying to stop unknown container!	APPID=application_1449210909990_0001	CONTAINERID=container_1449210909990_0001_01_000001
[       NodeManager.txt] 2015-12-04 14:42:48,107 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1449210909990_0001 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
[       NodeManager.txt] 2015-12-04 14:42:48,107 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /uloc/hadoopdata/hadoop-bruce/yarn/nmlocal/usercache/bruce/appcache/application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:48,107 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1449210909990_0001
[       NodeManager.txt] 2015-12-04 14:42:48,109 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1449210909990_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
[       NodeManager.txt] 2015-12-04 14:42:48,109 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1449210909990_0001, with delay of 10800 seconds
[       NodeManager.txt] 2015-12-04 14:42:48,181 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1449210909990_0001_01_000001
[          NameNode.txt] 2015-12-04 14:42:48,899 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.100.200:50010 to delete [blk_4998228328493697677_13438, blk_3649142917674748753_13432, blk_-3104879886624780072_13430, blk_-1097534736843614414_13436, blk_-2734758558562150374_13434]
[          DataNode.txt] 2015-12-04 14:42:50,816 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_4998228328493697677_13438 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_4998228328493697677 for deletion
[          DataNode.txt] 2015-12-04 14:42:50,816 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_3649142917674748753_13432 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_3649142917674748753 for deletion
[          DataNode.txt] 2015-12-04 14:42:50,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_-3104879886624780072_13430 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-3104879886624780072 for deletion
[          DataNode.txt] 2015-12-04 14:42:50,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_4998228328493697677_13438 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_4998228328493697677
[          DataNode.txt] 2015-12-04 14:42:50,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_-1097534736843614414_13436 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-1097534736843614414 for deletion
[          DataNode.txt] 2015-12-04 14:42:50,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_-2734758558562150374_13434 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-2734758558562150374 for deletion
[          DataNode.txt] 2015-12-04 14:42:50,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_3649142917674748753_13432 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_3649142917674748753
[          DataNode.txt] 2015-12-04 14:42:50,817 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_-3104879886624780072_13430 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-3104879886624780072
[          DataNode.txt] 2015-12-04 14:42:50,818 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_-1097534736843614414_13436 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-1097534736843614414
[          DataNode.txt] 2015-12-04 14:42:50,818 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_-2734758558562150374_13434 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-2734758558562150374
[          DataNode.txt] 2015-12-04 14:44:22,366 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.100.200:50010, dest: /192.168.100.200:49737, bytes: 350, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-801600361_1, offset: 0, srvID: DS-1689463280-192.168.100.200-50010-1447913181338, blockid: BP-981411196-192.168.100.200-1447912540337:blk_-7104736880875874536_13455, duration: 517694
[          NameNode.txt] 2015-12-04 14:44:22,367 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 119 Total time for transactions(ms): 14Number of transactions batched in Syncs: 1 Number of syncs: 65 SyncTimes(ms): 351
[          NameNode.txt] 2015-12-04 14:44:24,915 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* InvalidateBlocks: ask 192.168.100.200:50010 to delete [blk_-7104736880875874536_13455]
[          DataNode.txt] 2015-12-04 14:44:26,820 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_-7104736880875874536_13455 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-7104736880875874536 for deletion
[          DataNode.txt] 2015-12-04 14:44:26,821 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-981411196-192.168.100.200-1447912540337 blk_-7104736880875874536_13455 file /uloc/hadoopdata/hadoop-bruce/dfs/data/current/BP-981411196-192.168.100.200-1447912540337/current/finalized/subdir12/subdir24/blk_-7104736880875874536
[          DataNode.txt] 2015-12-04 14:45:26,557 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: BP-981411196-192.168.100.200-1447912540337:blk_4244323377233503308_13415 is no longer in the dataset



The following is based on: http://blog.csdn.net/zhangjun2915/article/details/9336385

The files job.split and job.splitmetainfo store information about the InputSplits. Hadoop MapReduce divides all input files into InputSplits (the partitioning rule is defined by the InputFormat implementation), and for each InputSplit the JobTracker assigns one task to a TaskTracker to run the map. Before the job starts, this partitioning must already be done, and it is actually performed on the client side. After the client computes the splits, it writes the split information into job.split and job.splitmetainfo, then uploads both files to the staging dir.
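The split computation the client performs can be sketched as follows. This is a simplified, illustrative Python rendering of the default rule in Hadoop's FileInputFormat — splitSize = max(minSize, min(maxSize, blockSize)) — not the actual Java implementation; the 1.1 slack factor matches the SPLIT_SLOP constant Hadoop uses to avoid producing a tiny trailing split.

```python
# Simplified sketch of client-side split computation, mirroring the default
# rule in org.apache.hadoop.mapreduce.lib.input.FileInputFormat.
# Illustrative Python only, not the actual Java implementation.

SPLIT_SLOP = 1.1  # Hadoop allows the last split to be up to 1.1x splitSize


def compute_splits(file_len, block_size, min_size=1, max_size=float("inf")):
    """Return a list of (offset, length) pairs covering the file."""
    split_size = max(min_size, min(max_size, block_size))
    splits = []
    remaining = file_len
    # Cut full-size splits while more than SPLIT_SLOP * splitSize remains.
    while remaining / split_size > SPLIT_SLOP:
        splits.append((file_len - remaining, split_size))
        remaining -= split_size
    # Whatever remains (possibly slightly over splitSize) is the last split.
    if remaining > 0:
        splits.append((file_len - remaining, remaining))
    return splits


if __name__ == "__main__":
    MB = 1 << 20
    # A 300 MB file with a 128 MB block size yields 3 splits: 128, 128, 44 MB.
    for off, length in compute_splits(300 * MB, 128 * MB):
        print(off // MB, length // MB)
```

With one split per map task, this is why the console log above reports "number of splits:1" for the small demo input: the whole file fits in a single split.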


The next question is: why are there two files, and what does each of them store? job.split stores all the InputSplits produced by the partitioning, and each InputSplit records:

  • The Split's type (ClassName, mostly org.apache.hadoop.mapreduce.lib.input.FileSplit)
  • The path of the file the Split belongs to (FilePath)
  • The Split's starting position within that file (FileOffset)
  • The Split's length in bytes (Length)
job.splitmetainfo stores metadata about each InputSplit:
  • The nodes on which the Split's data is local (Location)
  • The position of the corresponding InputSplit record within the job.split file (SplitFileOffset)
  • The Split's length in bytes (Length, the same as that in job.split)


  • job.splitmetainfo is read by the JobTracker. For example, from the number of splits the JobTracker knows how many tasks to allocate, and from Location it decides which node should run the task for each Split (preferring a node that holds the Split's data locally).
  • job.split is read by the TaskTrackers. From FilePath, FileOffset, and Length, a TaskTracker knows which file to read and at which position to start reading the Split data it is to process.
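The division of labor above can be sketched with two record types and a locality-aware assignment step. The field names (FilePath, FileOffset, Length, Location, SplitFileOffset) follow the description in the text; the real files are written with Hadoop's Writable binary serialization, which this sketch does not reproduce, and pick_node is a hypothetical helper illustrating the data-local preference, not the actual scheduler code.

```python
# Sketch of the two-file layout: job.splitmetainfo is a small index the
# JobTracker reads (locations + where each record sits in job.split), while
# job.split holds the full records the TaskTrackers read. Field names follow
# the text; the real files use Hadoop's Writable binary encoding.
from dataclasses import dataclass


@dataclass
class SplitRecord:            # one entry in job.split
    class_name: str           # e.g. "org.apache.hadoop.mapreduce.lib.input.FileSplit"
    file_path: str            # FilePath
    file_offset: int          # FileOffset: where the split starts in the input file
    length: int               # Length in bytes


@dataclass
class SplitMetaRecord:        # one entry in job.splitmetainfo
    locations: list           # Location: nodes holding this split's data locally
    split_file_offset: int    # SplitFileOffset: position of the record in job.split
    length: int               # same Length as in job.split


def pick_node(meta, free_nodes):
    """JobTracker-style choice: prefer a node where the split's data is local."""
    for node in meta.locations:
        if node in free_nodes:
            return node                  # data-local assignment
    return sorted(free_nodes)[0]         # otherwise fall back to any free node


if __name__ == "__main__":
    meta = SplitMetaRecord(locations=["node1", "node2"],
                           split_file_offset=0, length=350)
    print(pick_node(meta, {"node2", "node3"}))  # node2 holds the data locally
```

The indirection matters: the JobTracker only ever touches the small job.splitmetainfo index, and each TaskTracker uses SplitFileOffset to seek directly to its own record in job.split rather than parsing the whole file.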


