To install Hadoop on 64-bit Windows there is no need to bother with Cygwin: download the official Hadoop tarball and unpack it locally -> apply a minimal configuration to 4 files -> run one start command -> done. The one prerequisite is that the JDK is already installed on your machine and the Java environment variables are set. The steps are detailed below, using Hadoop 2.7.2 as the example.
1. Download the Hadoop tarball: go to http://hadoop.apache.org/ -> click Releases on the left -> click a mirror site -> open http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common -> download hadoop-2.7.2.tar.gz;
2. Unpack it: copy the tarball to the root of drive D and extract it, which produces the directory D:\hadoop-2.7.2. Set the environment variable HADOOP_HOME to that directory and add %HADOOP_HOME%\bin to PATH. Then download the Windows helper binaries from http://download.csdn.net/detail/wuxun1997/9841472, extract them, drop the files into D:\hadoop-2.7.2\bin, and also place a copy of hadoop.dll in C:\Windows\System32;
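Before going further, a quick smoke test (a suggested extra check, not part of the original steps) can confirm the environment variables took effect. Open a NEW command prompt so the updated PATH is picked up:

```shell
:: Should print D:\hadoop-2.7.2
echo %HADOOP_HOME%

:: Should print "Hadoop 2.7.2" plus build info; if you instead get
:: "'hadoop' is not recognized...", PATH is not set correctly
hadoop version

:: Should print the full path of winutils.exe under %HADOOP_HOME%\bin;
:: without this file the daemons fail to start on Windows
where winutils
```

If `hadoop version` fails with a Java error instead, recheck JAVA_HOME (note that 2.7.x's scripts dislike spaces in the JDK path, e.g. under "Program Files").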
3. In D:\hadoop-2.7.2\etc\hadoop, find the following 4 files and paste in the minimal configuration below:
core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/data/dfs/datanode</value>
    </property>
</configuration>
mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
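Before formatting, you can verify that this minimal configuration is actually being picked up. `hdfs getconf` reads the same configuration directory the daemons will use (a suggested check; the property names are the standard Hadoop keys configured above):

```shell
:: Should print hdfs://localhost:9000 if core-site.xml is in effect
hdfs getconf -confKey fs.defaultFS

:: Should print 1, the replication factor set in hdfs-site.xml
hdfs getconf -confKey dfs.replication
```

If either command prints a default value instead (e.g. file:/// or 3), the edited files are not in the directory Hadoop is reading, or HADOOP_HOME points elsewhere.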
4. Open a Windows command prompt, cd into hadoop-2.7.2\bin, and run the following 2 commands: first format the namenode, then start Hadoop.
D:\hadoop-2.7.2\bin>hadoop namenode -format    (you may have to press Ctrl+C to exit afterwards)
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/05/13 07:16:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = wulinfeng/192.168.8.5
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.2
STARTUP_MSG:   classpath = D:\hadoop-2.7.2\etc\hadoop;D:\hadoop-2.7.2\share\hado....D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.2.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.8.0_101
************************************************************/
17/05/13 07:16:40 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf
17/05/13 07:16:42 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/13 07:16:42 INFO namenode.FSNamesystem: fsLock is fair:true
17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/05/13 07:16:42 INFO blockmanagement.BlockManager: The block deletion will start around 2017 May 13 07:16:42
17/05/13 07:16:42 INFO util.GSet: Computing capacity for map BlocksMap
17/05/13 07:16:42 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:42 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/05/13 07:16:42 INFO util.GSet: capacity      = 2^21 = 2097152 entries
.......
17/05/13 07:16:43 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/05/13 07:16:43 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:43 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/05/13 07:16:43 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/05/13 07:16:43 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/05/13 07:16:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-664414510-192.168.8.5-1494631003212
17/05/13 07:16:43 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
17/05/13 07:16:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/05/13 07:16:43 INFO util.ExitUtil: Exiting with status 0
17/05/13 07:16:43 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wulinfeng/192.168.8.5
************************************************************/

D:\hadoop-2.7.2\bin>cd ..\sbin

D:\hadoop-2.7.2\sbin>start-all.cmd
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons

D:\hadoop-2.7.2\sbin>jps
4944 DataNode
5860 NodeManager
3532 Jps
7852 NameNode
7932 ResourceManager

D:\hadoop-2.7.2\sbin>
The jps output shows that all 4 processes are up, and with that the Hadoop installation and startup is done. You can now point a browser at localhost:8088 to watch MapReduce jobs, and at localhost:50070 -> Utilities -> Browse the file system to inspect HDFS files. When restarting Hadoop there is no need to format the namenode again; just run stop-all.cmd and then start-all.cmd.
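At this point you can also smoke-test the cluster from the command line. The sketch below uploads a file and runs the wordcount job from the examples jar that ships with the distribution; the path /user/test and the file input.txt are arbitrary names chosen for the test, not anything Hadoop requires:

```shell
:: Create a directory in HDFS and upload a local text file
:: (create input.txt yourself with a few lines of text first)
hdfs dfs -mkdir -p /user/test
hdfs dfs -put input.txt /user/test/
hdfs dfs -ls /user/test

:: Run the bundled wordcount example; the output directory
:: /user/test/out must not already exist
hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.2.jar wordcount /user/test/input.txt /user/test/out

:: Print the word counts produced by the reducer
hdfs dfs -cat /user/test/out/part-r-00000
```

While the job runs you should see it appear in the localhost:8088 web UI, and afterwards the output files are browsable under localhost:50070.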
Starting those 4 processes pops up 4 console windows; let's look at what each of the 4 processes logs at startup:
DataNode
************************************************************/
17/05/13 07:18:24 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/05/13 07:18:25 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/05/13 07:18:25 INFO impl.MetricsSystemImpl: DataNode metrics system started
17/05/13 07:18:25 INFO datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
17/05/13 07:18:25 INFO datanode.DataNode: Configured hostname is wulinfeng
17/05/13 07:18:25 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0
17/05/13 07:18:25 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50010
17/05/13 07:18:25 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
17/05/13 07:18:25 INFO datanode.DataNode: Number threads for balancing is 5
17/05/13 07:18:25 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
........
17/05/13 07:18:26 INFO http.HttpServer2: Jetty bound to port 53058
17/05/13 07:18:26 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:29 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:53058
17/05/13 07:18:41 INFO web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
17/05/13 07:18:42 INFO datanode.DataNode: dnUserName = Administrator
17/05/13 07:18:42 INFO datanode.DataNode: supergroup = supergroup
17/05/13 07:18:42 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:42 INFO ipc.Server: Starting Socket Reader #1 for port 50020
17/05/13 07:18:42 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:50020
17/05/13 07:18:42 INFO datanode.DataNode: Refresh request received for nameservices: null
17/05/13 07:18:42 INFO datanode.DataNode: Starting BPOfferServices for nameservices: <default>
17/05/13 07:18:42 INFO ipc.Server: IPC Server listener on 50020: starting
17/05/13 07:18:42 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:42 INFO datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
17/05/13 07:18:43 INFO common.Storage: Lock on \hadoop\data\dfs\datanode\in_use.lock acquired by nodename 4944@wulinfeng
17/05/13 07:18:43 INFO common.Storage: Storage directory \hadoop\data\dfs\datanode is not formatted for BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Formatting ...
17/05/13 07:18:43 INFO common.Storage: Analyzing storage directories for bpid BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Locking is disabled for \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Block pool storage directory \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212 is not formatted for BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Formatting ...
17/05/13 07:18:43 INFO common.Storage: Formatting block pool BP-664414510-192.168.8.5-1494631003212 directory \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212\current
17/05/13 07:18:43 INFO datanode.DataNode: Setting up storage: nsid=61861794;bpid=BP-664414510-192.168.8.5-1494631003212;lv=-56;nsInfo=lv=-63;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0;bpid=BP-664414510-192.168.8.5-1494631003212;dnuuid=null
17/05/13 07:18:43 INFO datanode.DataNode: Generated and persisted new Datanode UUID e6e53ca9-b788-4c1c-9308-29b31be28705
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added new volume: DS-f2b82635-0df9-484f-9d12-4364a9279b20
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added volume - \hadoop\data\dfs\datanode\current, StorageType: DISK
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Adding block pool BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Scanning block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current...
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-664414510-192.168.8.5-1494631003212 on D:\hadoop\data\dfs\datanode\current: 15ms
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-664414510-192.168.8.5-1494631003212: 20ms
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current...
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current: 0ms
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Total time to add all replicas to map: 17ms
17/05/13 07:18:44 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1494650306107 with interval 21600000
17/05/13 07:18:44 INFO datanode.VolumeScanner: Now scanning bpid BP-664414510-192.168.8.5-1494631003212 on volume \hadoop\data\dfs\datanode
17/05/13 07:18:44 INFO datanode.VolumeScanner: VolumeScanner(\hadoop\data\dfs\datanode, DS-f2b82635-0df9-484f-9d12-4364a9279b20): finished scanning block pool BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:44 INFO datanode.DataNode: Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 beginning handshake with NN
17/05/13 07:18:44 INFO datanode.VolumeScanner: VolumeScanner(\hadoop\data\dfs\datanode, DS-f2b82635-0df9-484f-9d12-4364a9279b20): no suitable block pools found to scan. Waiting 1814399766 ms.
17/05/13 07:18:44 INFO datanode.DataNode: Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 successfully registered with NN
17/05/13 07:18:44 INFO datanode.DataNode: For namenode localhost/127.0.0.1:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
17/05/13 07:18:44 INFO datanode.DataNode: Namenode Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid e6e53ca9-b788-4c1c-9308-29b31be28705) service to localhost/127.0.0.1:9000 trying to claim ACTIVE state with txid=1
17/05/13 07:18:44 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid e6e53ca9-b788-4c1c-9308-29b31be28705) service to localhost/127.0.0.1:9000
17/05/13 07:18:44 INFO datanode.DataNode: Successfully sent block report 0x20e81034dafa, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 5 msec to generate and 91 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
17/05/13 07:18:44 INFO datanode.DataNode: Got finalize command for block pool BP-664414510-192.168.8.5-1494631003212
NameNode
************************************************************/
17/05/13 07:18:24 INFO namenode.NameNode: createNameNode []
17/05/13 07:18:26 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/05/13 07:18:26 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/05/13 07:18:26 INFO impl.MetricsSystemImpl: NameNode metrics system started
17/05/13 07:18:26 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
17/05/13 07:18:26 INFO namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
17/05/13 07:18:28 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
17/05/13 07:18:28 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
17/05/13 07:18:28 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
.......
17/05/13 07:18:28 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
17/05/13 07:18:28 INFO http.HttpServer2: Jetty bound to port 50070
17/05/13 07:18:28 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:31 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
17/05/13 07:18:31 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/13 07:18:31 INFO namenode.FSNamesystem: fsLock is fair:true
17/05/13 07:18:31 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/05/13 07:18:31 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/05/13 07:18:31 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/05/13 07:18:31 INFO blockmanagement.BlockManager: The block deletion will start around 2017 May 13 07:18:31
17/05/13 07:18:31 INFO util.GSet: Computing capacity for map BlocksMap
17/05/13 07:18:31 INFO util.GSet: VM type       = 64-bit
17/05/13 07:18:31 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/05/13 07:18:31 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/05/13 07:18:31 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/05/13 07:18:31 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxReplication             = 512
17/05/13 07:18:31 INFO blockmanagement.BlockManager: minReplication             = 1
17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/05/13 07:18:31 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/05/13 07:18:31 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/05/13 07:18:31 INFO namenode.FSNamesystem: fsOwner             = Administrator (auth:SIMPLE)
17/05/13 07:18:31 INFO namenode.FSNamesystem: supergroup          = supergroup
17/05/13 07:18:31 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/05/13 07:18:31 INFO namenode.FSNamesystem: HA Enabled: false
17/05/13 07:18:31 INFO namenode.FSNamesystem: Append Enabled: true
17/05/13 07:18:32 INFO util.GSet: Computing capacity for map INodeMap
17/05/13 07:18:32 INFO util.GSet: VM type       = 64-bit
17/05/13 07:18:32 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/05/13 07:18:32 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/05/13 07:18:32 INFO namenode.FSDirectory: ACLs enabled? false
17/05/13 07:18:32 INFO namenode.FSDirectory: XAttrs enabled? true
17/05/13 07:18:32 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/05/13 07:18:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/05/13 07:18:32 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/13 07:18:32 INFO util.GSet: VM type       = 64-bit
17/05/13 07:18:32 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/05/13 07:18:32 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/05/13 07:18:32 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/13 07:18:32 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/05/13 07:18:33 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/13 07:18:33 INFO util.GSet: VM type       = 64-bit
17/05/13 07:18:33 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/05/13 07:18:33 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/05/13 07:18:33 INFO common.Storage: Lock on \hadoop\data\dfs\namenode\in_use.lock acquired by nodename 7852@wulinfeng
17/05/13 07:18:34 INFO namenode.FileJournalManager: Recovering unfinalized segments in \hadoop\data\dfs\namenode\current
17/05/13 07:18:34 INFO namenode.FSImage: No edit log streams selected.
17/05/13 07:18:34 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
17/05/13 07:18:34 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
17/05/13 07:18:34 INFO namenode.FSImage: Loaded image for txid 0 from \hadoop\data\dfs\namenode\current\fsimage_0000000000000000000
17/05/13 07:18:34 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
17/05/13 07:18:34 INFO namenode.FSEditLog: Starting log segment at 1
17/05/13 07:18:34 INFO namenode.NameCache: initialized with 0 entries 0 lookups
17/05/13 07:18:35 INFO namenode.FSNamesystem: Finished loading FSImage in 1331 msecs
17/05/13 07:18:36 INFO namenode.NameNode: RPC server is binding to localhost:9000
17/05/13 07:18:36 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:36 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
17/05/13 07:18:36 INFO ipc.Server: Starting Socket Reader #1 for port 9000
17/05/13 07:18:36 INFO namenode.LeaseManager: Number of blocks under construction: 0
17/05/13 07:18:36 INFO namenode.LeaseManager: Number of blocks under construction: 0
17/05/13 07:18:36 INFO namenode.FSNamesystem: initializing replication queues
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Leaving safe mode after 5 secs
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
17/05/13 07:18:36 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Total number of blocks            = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of invalid blocks          = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of over-replicated blocks  = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of blocks being written    = 0
17/05/13 07:18:37 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 98 msec
17/05/13 07:18:37 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
17/05/13 07:18:37 INFO namenode.FSNamesystem: Starting services required for active state
17/05/13 07:18:37 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:37 INFO ipc.Server: IPC Server listener on 9000: starting
17/05/13 07:18:37 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
17/05/13 07:18:44 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0) storage e6e53ca9-b788-4c1c-9308-29b31be28705
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
17/05/13 07:18:44 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-f2b82635-0df9-484f-9d12-4364a9279b20 for DN 127.0.0.1:50010
17/05/13 07:18:44 INFO BlockStateChange: BLOCK* processReport: from storage DS-f2b82635-0df9-484f-9d12-4364a9279b20 node DatanodeRegistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 msecs
NodeManager
************************************************************/
17/05/13 07:18:45 INFO http.HttpServer2: adding path spec: /node/*
17/05/13 07:18:45 INFO http.HttpServer2: adding path spec: /ws/*
17/05/13 07:18:46 INFO webapp.WebApps: Registered webapp guice modules
17/05/13 07:18:46 INFO http.HttpServer2: Jetty bound to port 8042
17/05/13 07:18:46 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:46 INFO mortbay.log: Extract jar:file:/D:/hadoop-2.7.2/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar!/webapps/node to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_0_0_0_0_8042_node____19tj0x\webapp
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
May 13, 2017 7:18:47 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:48 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
17/05/13 07:18:48 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
17/05/13 07:18:48 INFO webapp.WebApps: Web app node started at 8042
17/05/13 07:18:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8031
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
17/05/13 07:18:49 INFO security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -610858047
17/05/13 07:18:49 INFO security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id 2017302061
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as wulinfeng:53137 with total resource of <memory:8192, vCores:8>
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
ResourceManager
************************************************************/
17/05/13 07:18:19 INFO conf.Configuration: found resource core-site.xml at file:/D:/hadoop-2.7.2/etc/hadoop/core-site.xml
17/05/13 07:18:20 INFO security.Groups: clearing userToGroupsMap cache
17/05/13 07:18:21 INFO conf.Configuration: found resource yarn-site.xml at file:/D:/hadoop-2.7.2/etc/hadoop/yarn-site.xml
17/05/13 07:18:21 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
........
17/05/13 07:18:34 INFO http.HttpServer2: adding path spec: /cluster/*
17/05/13 07:18:34 INFO http.HttpServer2: adding path spec: /ws/*
17/05/13 07:18:35 INFO webapp.WebApps: Registered webapp guice modules
17/05/13 07:18:35 INFO http.HttpServer2: Jetty bound to port 8088
17/05/13 07:18:35 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:35 INFO mortbay.log: Extract jar:file:/D:/hadoop-2.7.2/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar!/webapps/cluster to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_0_0_0_0_8088_cluster____u0rgz3\webapp
17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 13, 2017 7:18:36 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 13, 2017 7:18:37 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:38 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:40 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
17/05/13 07:18:41 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8088
17/05/13 07:18:41 INFO webapp.WebApps: Web app cluster started at 8088
17/05/13 07:18:41 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:41 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
17/05/13 07:18:41 INFO ipc.Server: IPC Server listener on 8033: starting
17/05/13 07:18:41 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:41 INFO ipc.Server: Starting Socket Reader #1 for port 8033
17/05/13 07:18:49 INFO util.RackResolver: Resolved wulinfeng to /default-rack
17/05/13 07:18:49 INFO resourcemanager.ResourceTrackerService: NodeManager from node wulinfeng(cmPort: 53137 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId wulinfeng:53137
17/05/13 07:18:49 INFO rmnode.RMNodeImpl: wulinfeng:53137 Node Transitioned from NEW to RUNNING
17/05/13 07:18:49 INFO capacity.CapacityScheduler: Added node wulinfeng:53137 clusterResource: <memory:8192, vCores:8>
17/05/13 07:28:30 INFO scheduler.AbstractYarnScheduler: Release request cache is cleaned up