Hadoop 3.1.1 fully distributed setup, help needed: all NameNode stats show 0, and it is not a hosts-file problem.

Hadoop 3.1.1. Could someone please look over my configuration files and the log? I can't make sense of the log myself.
The cluster has four nodes: node01, node02, node03, and node04 (screenshots of each node omitted).
On the NameNode web UI everything is 0: configured capacity is 0 and there are 0 live DataNodes. How do I fix this?

hdfs-site.xml
(screenshot omitted)
core-site.xml
(screenshot omitted)
hadoop-env.sh
(screenshot omitted)
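The config screenshots did not survive, so here is a minimal sketch of what core-site.xml and hdfs-site.xml would look like, reconstructed only from values the NameNode log below actually prints (fs.defaultFS = hdfs://192.168.120.11:9820, defaultReplication = 2, name dir /var/sxt/hadoop/full/dfs/name). This is not the poster's exact file, just a reference point:

```xml
<!-- core-site.xml (sketch; fs.defaultFS taken from the log) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.120.11:9820</value>
  </property>
</configuration>

<!-- hdfs-site.xml (sketch; replication and name dir match the log) -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/sxt/hadoop/full/dfs/name</value>
  </property>
</configuration>
```

Also worth checking: in Hadoop 3.x the DataNode host list lives in `$HADOOP_HOME/etc/hadoop/workers` (not `slaves`); for this cluster it should contain node02, node03, and node04, one per line.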
NameNode log file:
2019-03-19 19:57:20,933 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-03-19 19:57:20,940 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2019-03-19 19:57:21,692 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-03-19 19:57:22,009 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-03-19 19:57:22,009 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-03-19 19:57:22,078 INFO org.apache.hadoop.hdfs.server.namenode.NameNodeUtils: fs.defaultFS is hdfs://192.168.120.11:9820
2019-03-19 19:57:22,079 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients should use 192.168.120.11:9820 to access this namenode/service.
2019-03-19 19:57:22,250 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-03-19 19:57:22,772 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2019-03-19 19:57:22,871 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
2019-03-19 19:57:22,931 INFO org.eclipse.jetty.util.log: Logging initialized @3136ms
2019-03-19 19:57:23,335 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-03-19 19:57:23,390 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2019-03-19 19:57:23,417 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-03-19 19:57:23,430 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2019-03-19 19:57:23,430 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-03-19 19:57:23,430 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-03-19 19:57:23,533 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-03-19 19:57:23,534 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-03-19 19:57:23,555 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 9870
2019-03-19 19:57:23,558 INFO org.eclipse.jetty.server.Server: jetty-9.3.19.v20170502
2019-03-19 19:57:23,711 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@c430e6c{/logs,file:///opt/sxt/hadoop-3.1.1/logs/,AVAILABLE}
2019-03-19 19:57:23,717 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@536f2a7e{/static,file:///opt/sxt/hadoop-3.1.1/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2019-03-19 19:57:24,077 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@452e19ca{/,file:///opt/sxt/hadoop-3.1.1/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-03-19 19:57:24,113 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@422faf1e{HTTP/1.1,[http/1.1]}{0.0.0.0:9870}
2019-03-19 19:57:24,114 INFO org.eclipse.jetty.server.Server: Started @4325ms
2019-03-19 19:57:25,874 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-19 19:57:25,874 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-03-19 19:57:26,310 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit logging is async:true
2019-03-19 19:57:26,417 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: KeyProvider: null
2019-03-19 19:57:26,428 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2019-03-19 19:57:26,446 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2019-03-19 19:57:26,487 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
2019-03-19 19:57:26,487 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2019-03-19 19:57:26,487 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-03-19 19:57:26,488 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2019-03-19 19:57:26,769 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-03-19 19:57:26,852 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-03-19 19:57:26,873 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-03-19 19:57:26,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-03-19 19:57:26,914 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Mar 19 19:57:26
2019-03-19 19:57:26,922 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-03-19 19:57:26,922 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-03-19 19:57:26,939 INFO org.apache.hadoop.util.GSet: 2.0% max memory 237.8 MB = 4.8 MB
2019-03-19 19:57:26,939 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2019-03-19 19:57:26,991 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable = false
2019-03-19 19:57:27,063 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2019-03-19 19:57:27,063 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-03-19 19:57:27,063 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2019-03-19 19:57:27,063 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2019-03-19 19:57:27,063 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 2
2019-03-19 19:57:27,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2019-03-19 19:57:27,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2019-03-19 19:57:27,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2019-03-19 19:57:27,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2019-03-19 19:57:27,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2019-03-19 19:57:27,064 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2019-03-19 19:57:27,395 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-03-19 19:57:27,395 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-03-19 19:57:27,403 INFO org.apache.hadoop.util.GSet: 1.0% max memory 237.8 MB = 2.4 MB
2019-03-19 19:57:27,404 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2019-03-19 19:57:27,405 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-03-19 19:57:27,405 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: POSIX ACL inheritance enabled? true
2019-03-19 19:57:27,407 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-03-19 19:57:27,407 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2019-03-19 19:57:27,491 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-03-19 19:57:27,522 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: SkipList is disabled
2019-03-19 19:57:27,569 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-03-19 19:57:27,569 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-03-19 19:57:27,577 INFO org.apache.hadoop.util.GSet: 0.25% max memory 237.8 MB = 608.8 KB
2019-03-19 19:57:27,577 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2019-03-19 19:57:27,671 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-03-19 19:57:27,672 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-03-19 19:57:27,672 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-03-19 19:57:27,704 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2019-03-19 19:57:27,704 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-03-19 19:57:27,729 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2019-03-19 19:57:27,730 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-03-19 19:57:27,734 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 237.8 MB = 73.1 KB
2019-03-19 19:57:27,734 INFO org.apache.hadoop.util.GSet: capacity = 2^13 = 8192 entries
2019-03-19 19:57:27,850 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/sxt/hadoop/full/dfs/name/in_use.lock acquired by nodename 3268@node01
2019-03-19 19:57:28,057 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /var/sxt/hadoop/full/dfs/name/current
2019-03-19 19:57:28,057 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2019-03-19 19:57:28,058 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/var/sxt/hadoop/full/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2019-03-19 19:57:28,689 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the erasure coding policy RS-6-3-1024k
2019-03-19 19:57:28,702 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2019-03-19 19:57:28,940 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2019-03-19 19:57:28,940 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /var/sxt/hadoop/full/dfs/name/current/fsimage_0000000000000000000
2019-03-19 19:57:28,991 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-03-19 19:57:28,996 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2019-03-19 19:57:30,603 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2019-03-19 19:57:30,603 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 2841 msecs
2019-03-19 19:57:32,797 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to node01:9820
2019-03-19 19:57:32,917 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-03-19 19:57:32,971 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9820
2019-03-19 19:57:33,937 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2019-03-19 19:57:33,995 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2019-03-19 19:57:34,058 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: initializing replication queues
2019-03-19 19:57:34,061 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2019-03-19 19:57:34,062 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2019-03-19 19:57:34,062 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2019-03-19 19:57:34,141 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2019-03-19 19:57:34,150 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2019-03-19 19:57:34,150 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2019-03-19 19:57:34,150 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2019-03-19 19:57:34,150 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2019-03-19 19:57:34,150 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 86 msec
2019-03-19 19:57:34,382 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-03-19 19:57:34,393 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: node01/192.168.120.11:9820
2019-03-19 19:57:34,395 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9820: starting
2019-03-19 19:57:34,488 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2019-03-19 19:57:34,488 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Initializing quota with 4 thread(s)
2019-03-19 19:57:34,525 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Quota initialization completed in 36 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2019-03-19 19:57:34,583 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
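The log above contains no errors on the NameNode side; the telling line is "Network topology has 0 racks and 0 datanodes", which means no DataNode ever registered. A quick way to pull that count out of a NameNode log (demonstrated here on that single line):

```shell
#!/bin/sh
# Extract the registered-DataNode count from a NameNode log line.
# The line below is copied verbatim from the log in this post.
line='2019-03-19 19:57:34,062 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes'
echo "$line" | grep -o '[0-9]* datanodes'   # prints: 0 datanodes
```

Since the NameNode started cleanly, the next place to look is the DataNode logs on node02 through node04 (under `$HADOOP_HOME/logs/`). Typical causes of 0 live nodes in this situation are: the DataNodes cannot reach 192.168.120.11:9820 (firewall/iptables blocking the port), a clusterID mismatch in the DataNode data directory after re-running `hdfs namenode -format`, or an empty/incorrect `workers` file so the DataNode daemons were never started. These are common suspects, not a diagnosis; the DataNode logs will name the real one.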
