Kylin access exceptions after modifying the Hadoop configuration files
1. After modifying the Hadoop configuration files, restart the Hadoop services.
2. After Hadoop restarts, restart HBase as well. In my experience, restarting the service twice is more reliable; a single restart may not succeed.
3. Then restart Kylin on each node, one by one.

- Directory /usr/local/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
Re-format the NameNode: hdfs namenode -format
Also, every incremental build requires a restart; otherwise the configuration files cannot be found.
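The restart-and-format sequence above can be sketched as a shell session. The service scripts assume a standard tarball install with the Hadoop and HBase sbin/bin directories on PATH and KYLIN_HOME set; adjust paths for your cluster:

```shell
# Stop services before re-formatting (order: Kylin -> HBase -> Hadoop).
$KYLIN_HOME/bin/kylin.sh stop
stop-hbase.sh
stop-yarn.sh && stop-dfs.sh

# Re-format the NameNode when its storage directory is inconsistent.
# WARNING: this erases HDFS metadata; only acceptable on a test cluster.
hdfs namenode -format

# Bring everything back up (restart HBase a second time if the first
# attempt does not come up cleanly, as noted above).
start-dfs.sh && start-yarn.sh
start-hbase.sh
$KYLIN_HOME/bin/kylin.sh start
```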
Modify kylin.properties on the worker nodes:
kylin.server.mode=job
kylin.server.cluster-servers=99.48.232.210:7070,99.48.232.211:7070,99.48.232.213:7070

- Caused by: org.apache.hadoop.hbase.TableExistsException: kylin_metadata
This error occurs because Kylin has been run before and its metadata is still kept in HBase; it needs to be cleaned up.
Steps:
- hbase zkcli  # log in to the ZooKeeper client
- ls /hbase/table  # HBase keeps its table metadata under this directory
- rmr /hbase/table  # delete the table metadata
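The steps above can also be run non-interactively. Passing the command as an argument to hbase zkcli assumes your HBase version forwards arguments to ZooKeeper's zkCli; verify this on your install before relying on it:

```shell
# Inspect, then delete, the stale HBase table metadata in ZooKeeper.
hbase zkcli ls /hbase/table
hbase zkcli rmr /hbase/table   # removes the stale kylin_metadata entry

# Restart HBase afterwards so it rebuilds the znode from disk.
stop-hbase.sh && start-hbase.sh
```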
ACCEPTED: waiting for AM container to be allocated, launched and register with RM.
The main cause is an unhealthy DataNode/NodeManager, which can be seen in the web UI.
The problem comes from these settings in yarn-site.xml:
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/tmp/hadoop/nodemanager</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/tmp/hadoop/nodemanager/logs</value>
</property>
The disks holding these two configured paths are out of space. Add more disks, expand the existing ones, or clear the two directories.
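A quick way to confirm the disk-space diagnosis before touching anything. The paths come from the yarn-site.xml fragment above; the fallback to /tmp is only for machines where they do not exist:

```shell
LOCAL_DIR=/tmp/hadoop/nodemanager
LOG_DIR=/tmp/hadoop/nodemanager/logs

# Show utilization of the filesystems backing the two directories.
df -h "$LOCAL_DIR" "$LOG_DIR" 2>/dev/null || df -h /tmp

# YARN marks a disk bad once utilization exceeds
# yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
# (default 90%), which is what flips the node to "unhealthy" in the web UI.
# To reclaim space, stop the NodeManager first, then:
# rm -rf "$LOCAL_DIR"/* "$LOG_DIR"/*
```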
- File /kylin/kylin_metadata/kylin-f02df97d-cff7-4faf-8dc6-325ca5a738c5/kylin_intermediate_kylin_error_log_cube_fde7b1bc_6249_47c1_a680_eb07e84772aa/_temporary/1/_temporary/attempt_1532486903195_0015_m_000006_1/part-m-00006 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
In my test environment this was also caused by the same yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs settings in yarn-site.xml shown above: the disks holding those paths were out of space. Add more disks, expand them, or clear the two directories.
- org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher
Could not find any valid local directory for output/attempt_1532479393376_0050_r_000000_0/map_0.out
Description:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1532479393376_0050_r_000000_0/map_0.out
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
    at org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
    at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:65)
    at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:265)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:514)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:336)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
Cause: apparently similar to the previous problem, i.e. the NodeManager local directories are out of disk space.
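Since the last two failures share the same root cause, one preventive step is to move the NodeManager directories off /tmp onto a larger data disk. A minimal yarn-site.xml sketch, assuming /data is a mount with enough free space (the mount point is an assumption, not from the original setup):

```xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- comma-separated list; /data/hadoop/nodemanager is an assumed mount -->
  <value>/data/hadoop/nodemanager</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/hadoop/nodemanager/logs</value>
</property>
```

Restart the NodeManagers after changing these settings so the new directories take effect.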