Symptom:
CREATE TABLE test911 (SN String,PN_CODE String);
Total time spent in this metastore function was greater than 1000ms : createTable_(Table, )=200091
create/drop operations on this Hive cluster consistently took just over 200s each.
Analysis:
HMS syncs notification events to Sentry in near real time. When a large batch of sync messages needs processing and the background thread cannot keep up, messages back up and this exception appears.
The exception does not affect normal use of the cluster; it only slows create, drop, and similar operations, each of which waits up to 200s. The wait exists so the metastore can catch up to the latest notification id.
Sentry and Hive each track this sync in their metadata. The observed pattern: the SENTRY_HMS_NOTIFICATION_ID table in the Sentry database stopped advancing while the NOTIFICATION_SEQUENCE table in the Hive metastore database kept advancing; in other words, Sentry was consuming HMS events too slowly and fell behind. At that point the Hive HMS canary starts failing, and the create database operation above takes roughly 200.**s. The wait is governed by the sentry.notification.sync.timeout.ms parameter (200s), which also explains why create/drop operations all took just over 200s.
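To quantify how far behind Sentry is, you can compare the two counters directly. A minimal sketch, assuming NEXT_EVENT_ID is the id HMS will assign next (so the newest existing event is NEXT_EVENT_ID - 1; worth verifying against your Hive version):

```python
def sentry_notification_lag(hive_next_event_id: int, sentry_last_id: int) -> int:
    """Number of HMS notification events Sentry has not yet consumed.

    hive_next_event_id: NEXT_EVENT_ID from metastore.NOTIFICATION_SEQUENCE
    sentry_last_id:     max NOTIFICATION_ID from sentry.SENTRY_HMS_NOTIFICATION_ID
    """
    # NEXT_EVENT_ID points at the *next* event to be written, so subtract 1
    # to get the newest event that actually exists on the HMS side.
    return (hive_next_event_id - 1) - sentry_last_id


# With the values seen later in this write-up:
print(sentry_notification_lag(34047898, 34044370))  # 3527 events behind
```

If this lag keeps growing over time, Sentry will not catch up on its own and the manual fix in option 2 becomes relevant.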
Solution:
1. Lower sentry.notification.sync.timeout.ms appropriately
This parameter defaults to 200s. Lowering it shortens the wait for create/drop/alter and similar operations; when the backlog is small, this option is enough to let Sentry drain the messages on its own.
In Cloudera Manager, change the Sentry service configuration: sentry.notification.sync.timeout.ms = 10000
After the change and a Sentry restart, subsequent HMS canary failures timed out at just over 10s, confirming the parameter took effect.
2. Manually patch the record
Read the current SENTRY_HMS_NOTIFICATION_ID from the Sentry metadata:
mysql> use sentry;
mysql> select * from SENTRY_HMS_NOTIFICATION_ID order by NOTIFICATION_ID desc limit 2;
+-----------------+
| NOTIFICATION_ID |
+-----------------+
|        34044370 |
|        34044369 |
+-----------------+
Read the NOTIFICATION_SEQUENCE from the Hive metastore metadata:
mysql> use metastore;
mysql> select * from NOTIFICATION_SEQUENCE limit 1;
+--------+---------------+
| NNI_ID | NEXT_EVENT_ID |
+--------+---------------+
|      1 |      34047898 |
+--------+---------------+
Whenever Sentry lags behind Hive like this, HMS is bound to raise canary errors.
If the backlog is too large and Sentry would take too long to work through it, it may never catch up to the latest HMS id. In that case you can choose to discard the backlogged messages:
mysql> use sentry;
mysql> insert into SENTRY_HMS_NOTIFICATION_ID values(34047898);
mysql> select * from SENTRY_HMS_NOTIFICATION_ID order by NOTIFICATION_ID desc limit 2;
+-----------------+
| NOTIFICATION_ID |
+-----------------+
|        34047898 |
|        34044575 |
+-----------------+
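The catch-up step above is just an INSERT of the HMS high-water mark into the Sentry table. A hedged helper that builds that statement (table and column names are taken from the queries above; whether NEXT_EVENT_ID itself or NEXT_EVENT_ID - 1 is the right value for your Sentry version is worth double-checking):

```python
def sentry_catchup_sql(hive_next_event_id: int) -> str:
    """Build the SQL that fast-forwards Sentry to the newest HMS event id.

    hive_next_event_id: NEXT_EVENT_ID read from metastore.NOTIFICATION_SEQUENCE,
    mirroring the manual INSERT shown above.
    """
    return f"insert into SENTRY_HMS_NOTIFICATION_ID values({hive_next_event_id});"


print(sentry_catchup_sql(34047898))
# insert into SENTRY_HMS_NOTIFICATION_ID values(34047898);
```

Discarding events means Sentry never processes them, so confirm that is acceptable for your deployment, and back up the Sentry database before running the statement.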
Detailed HMS execution trace for the Hive SQL:
2024-09-13 09:12:49,886 DEBUG org.apache.hadoop.security.UserGroupInformation: [pool-8-thread-200]: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
2024-09-13 09:12:49,887 DEBUG org.apache.hadoop.hive.ql.log.PerfLogger: [pool-8-thread-200]:
2024-09-13 09:12:49,887 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-8-thread-200]: 200: source:10.11.16.36 create_table: Table(tableName:test912, dbName:test, owner:root, createTime:1726189969, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:sn, type:string, comment:null), FieldSchema(name:pn_code, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=0, numRows=0, rawDataSize=0, COLUMN_STATS_ACCURATE={“BASIC_STATS”:“true”}, numFiles=0, numFilesErasureCoded=0}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{root=[PrivilegeGrantInfo(privilege:INSERT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-1, grantor:root, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false, ownerType:USER)
2024-09-13 09:12:49,887 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-8-thread-200]: ugi=root ip=10.11.16.36
cmd=source:10.11.16.36 create_table: Table(tableName:test912, dbName:test, owner:root, createTime:1726189969, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:sn, type:string, comment:null), FieldSchema(name:pn_code, type:string, comment:null)],
location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=0, numRows=0, rawDataSize=0, COLUMN_STATS_ACCURATE={“BASIC_STATS”:“true”}, numFiles=0, numFilesErasureCoded=0}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{root=[PrivilegeGrantInfo(privilege:INSERT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-1, grantor:root, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-1, grantor:root, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false, ownerType:USER)
2024-09-13 09:12:49,888 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 1, isActive = true at:
sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
2024-09-13 09:12:49,888 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:3067)
2024-09-13 09:12:49,888 DEBUG org.apache.hadoop.hive.metastore.MetastoreDirectSqlUtils: [pool-8-thread-200]: Direct SQL query in 0.156529ms + 0.001045ms, the query is [SET @@session.sql_mode=ANSI_QUOTES]
2024-09-13 09:12:49,888 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase:query instantiated : select "DB_ID", "NAME", "DB_LOCATION_URI", "DESC", "OWNER_NAME", "OWNER_TYPE", "CREATE_TIME" FROM "DBS" where "NAME" = ? with param [test]
2024-09-13 09:12:49,889 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase:query2 instantiated : select "PARAM_KEY", "PARAM_VALUE" FROM "DATABASE_PARAMS" WHERE "DB_ID" = ? AND "PARAM_KEY" IS NOT NULL with param [253191]
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: directsql returning db test locn[hdfs://nameservice1/user/hive/warehouse/test.db] desc [null] owner [hive] ownertype [USER]
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Using direct SQL optimization.
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.commit(ObjectStore.java:3160)
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: db details for db test retrieved using
SQL in 1.599198ms
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1270)
2024-09-13 09:12:49,889 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 3, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1457)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 2, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1471)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1272)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:3067)
2024-09-13 09:12:49,890 DEBUG org.apache.hadoop.hive.metastore.MetastoreDirectSqlUtils: [pool-8-thread-200]: Direct SQL query in 0.126999ms + 6.9E-4ms, the query is [SET @@session.sql_mode=ANSI_QUOTES]
2024-09-13 09:12:49,890 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase:query instantiated : select "DB_ID", "NAME", "DB_LOCATION_URI", "DESC", "OWNER_NAME", "OWNER_TYPE", "CREATE_TIME" FROM "DBS" where "NAME" = ? with param [test]
2024-09-13 09:12:49,890 TRACE org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase:query2 instantiated : select "PARAM_KEY", "PARAM_VALUE" FROM "DATABASE_PARAMS" WHERE "DB_ID" = ? AND "PARAM_KEY" IS NOT NULL with param [253191]
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-8-thread-200]: getDatabase: directsql returning db test locn[hdfs://nameservice1/user/hive/warehouse/test.db] desc [null] owner [hive] ownertype [USER]
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Using direct SQL optimization.
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.commit(ObjectStore.java:3160)
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: db details for db test retrieved using
SQL in 0.717265ms
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: Looking for FS supporting hdfs
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: looking for configuration option fs.hdfs.impl
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: Looking in service filesystems for implementation class
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.fs.FileSystem: [pool-8-thread-200]: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.use.legacy.blockreader.local = false
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.read.shortcircuit = false
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.domain.socket.data.traffic = false
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2024-09-13 09:12:49,891 DEBUG org.apache.hadoop.hdfs.DFSClient: [pool-8-thread-200]: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2024-09-13 09:12:49,892 TRACE org.apache.hadoop.security.SecurityUtil: [pool-8-thread-200]: Name lookup for IT-CDH-Node01 took 0 ms.
2024-09-13 09:12:49,892 TRACE org.apache.hadoop.security.SecurityUtil: [pool-8-thread-200]: Name lookup for IT-CDH-Node02 took 0 ms.
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.HAUtilClient: [pool-8-thread-200]: No HA service delegation token found for logical URI hdfs://nameservice1/user/hive/warehouse/test.db/test912
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.use.legacy.blockreader.local = false
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.read.shortcircuit = false
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.client.domain.socket.data.traffic = false
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.client.impl.DfsClientConf: [pool-8-thread-200]: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.io.retry.RetryUtils: [pool-8-thread-200]: multipleLinearRandomRetry = null
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: getting client out of cache: org.apache.hadoop.ipc.Client@a4c506
2024-09-13 09:12:49,892 DEBUG org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil: [pool-8-thread-200]: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2024-09-13 09:12:49,893 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node01/10.11.16.35:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,893 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: The ping interval is 60000 ms.
2024-09-13 09:12:49,893 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: Connecting to IT-CDH-Node01/10.11.16.35:8020
2024-09-13 09:12:49,895 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Exception <- IT-CDH-Node01/10.11.16.35:8020: getFileInfo {org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
}
2024-09-13 09:…
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1445) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1355) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) ~[hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source) [?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1630) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1496) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:476) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1548) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at sun.reflect.GeneratedMethodAccessor173.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source) [?:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_202]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
2024-09-13 09:12:49,898 DEBUG org.apache.hadoop.io.retry.RetryInvocationHandler: [pool-8-thread-200]: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
, while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over IT-CDH-Node01/10.11.16.35:8020. Trying to failover immediately.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1421)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3055)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1445) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1355) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875) ~[hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source) [?:?]
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1630) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1496) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1493) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1508) [hadoop-hdfs-client-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:476) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1548) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at sun.reflect.GeneratedMethodAccessor173.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source) [?:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_202]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.3.1.jar:?]
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-2.1.1-cdh6.3.1.jar:2.1.1-cdh6.3.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
2024-09-13 09:12:49,901 TRACE org.apache.hadoop.io.retry.RetryInvocationHandler: [pool-8-thread-200]: #995 processRetryInfo: retryInfo=RetryInfo{retryTime=49501575527, delay=0, action=RetryAction(action=FAILOVER_AND_RETRY, delayMillis=0, reason=null), expectedFailoverCount=0, failException=null}, waitTime=-3
2024-09-13 09:12:49,901 DEBUG org.apache.hadoop.io.retry.RetryUtils: [pool-8-thread-200]: multipleLinearRandomRetry = null
2024-09-13 09:12:49,901 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: getting client out of cache: org.apache.hadoop.ipc.Client@a4c506
2024-09-13 09:12:49,901 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,901 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: The ping interval is 60000 ms.
2024-09-13 09:12:49,901 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: Connecting to IT-CDH-Node02/10.11.16.36:8020
2024-09-13 09:12:49,903 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 2ms
2024-09-13 09:12:49,903 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {}
2024-09-13 09:12:49,903 INFO org.apache.hadoop.hive.common.FileUtils: [pool-8-thread-200]: Creating directory if it doesn't exist: hdfs://nameservice1/user/hive/warehouse/test.db/test912
2024-09-13 09:12:49,903 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,903 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #996 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
2024-09-13 09:12:49,904 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 1ms
2024-09-13 09:12:49,904 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {}
2024-09-13 09:12:49,904 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,904 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #997 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
2024-09-13 09:12:49,905 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root got value #997
2024-09-13 09:12:49,905 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 1ms
2024-09-13 09:12:49,905 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {}
2024-09-13 09:12:49,905 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db"}
2024-09-13 09:12:49,905 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #998 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
2024-09-13 09:12:49,906 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 1ms
2024-09-13 09:12:49,906 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {fs { fileType: IS_DIR path: "" length: 0 permission { perm: 511 } owner: "hive" group: "hive" modification_time: 1726189724079 access_time: 0 block_replication: 0 blocksize: 0 fileId: 1435343893 childrenNum: 54 storagePolicy: 0 flags: 0 }}
2024-09-13 09:12:49,906 DEBUG org.apache.hadoop.hdfs.DFSClient: [pool-8-thread-200]: /user/hive/warehouse/test.db/test912: masked={ masked: rwxr-xr-x, unmasked: rwxrwxrwx }
2024-09-13 09:12:49,906 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: mkdirs {src: "/user/hive/warehouse/test.db/test912" masked { perm: 493 } createParent: true unmasked { perm: 511 }}
2024-09-13 09:12:49,906 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #999 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs
2024-09-13 09:12:49,907 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: mkdirs took 1ms
2024-09-13 09:12:49,907 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: mkdirs {result: true}
2024-09-13 09:12:49,907 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db"}
2024-09-13 09:12:49,908 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 1ms
2024-09-13 09:12:49,908 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {fs { fileType: IS_DIR path: "" length: 0 permission { perm: 511 } owner: "hive" group: "hive" modification_time: 1726189969907 access_time: 0 block_replication: 0 blocksize: 0 fileId: 1435343893 childrenNum: 55 storagePolicy: 0 flags: 0 }}
2024-09-13 09:12:49,908 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getAclStatus {src: "/user/hive/warehouse/test.db"}
2024-09-13 09:12:49,908 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #1001 org.apache.hadoop.hdfs.protocol.ClientProtocol.getAclStatus
2024-09-13 09:12:49,909 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root got value #1001
2024-09-13 09:12:49,909 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getAclStatus took 1ms
2024-09-13 09:12:49,909 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getAclStatus {result { owner: "hive" group: "hive" sticky: false permission { perm: 511 } }}
2024-09-13 09:12:49,909 TRACE shims.HdfsUtils: [pool-8-thread-200]: owner: hive, group: hive, acl: {entries: [], stickyBit: false}
2024-09-13 09:12:49,909 DEBUG shims.HdfsUtils: [pool-8-thread-200]: {-chgrp,-R,hive,hdfs://nameservice1/user/hive/warehouse/test.db/test912}
2024-09-13 09:12:49,910 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,910 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #1002 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
2024-09-13 09:12:49,911 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root got value #1002
2024-09-13 09:12:49,911 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 1ms
2024-09-13 09:12:49,911 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {fs { fileType: IS_DIR path: "" length: 0 permission { perm: 493 } owner: "root" group: "hive" modification_time: 1726189969907 access_time: 0 block_replication: 0 blocksize: 0 fileId: 2380487453 childrenNum: 0 storagePolicy: 0 flags: 0 }}
2024-09-13 09:12:49,911 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getListing {src: "/user/hive/warehouse/test.db/test912" startAfter: "" needLocation: false}
2024-09-13 09:12:49,911 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #1003 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
2024-09-13 09:12:49,912 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getListing took 1ms
2024-09-13 09:12:49,912 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getListing {dirList { remainingEntries: 0 }}
2024-09-13 09:12:49,912 DEBUG shims.HdfsUtils: [pool-8-thread-200]: Return value is :0
2024-09-13 09:12:49,912 DEBUG shims.HdfsUtils: [pool-8-thread-200]: {-setfacl,-R,--set,user::rwx,group::rwx,other::rwx,hdfs://nameservice1/user/hive/warehouse/test.db/test912}
2024-09-13 09:12:49,912 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getFileInfo {src: "/user/hive/warehouse/test.db/test912"}
2024-09-13 09:12:49,913 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #1004 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
2024-09-13 09:12:49,913 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root got value #1004
2024-09-13 09:12:49,913 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getFileInfo took 1ms
2024-09-13 09:12:49,913 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getFileInfo {fs { fileType: IS_DIR path: "" length: 0 permission { perm: 493 } owner: "root" group: "hive" modification_time: 1726189969907 access_time: 0 block_replication: 0 blocksize: 0 fileId: 2380487453 childrenNum: 0 storagePolicy: 0 flags: 0 }}
2024-09-13 09:12:49,913 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: setAcl {src: "/user/hive/warehouse/test.db/test912" aclSpec { type: USER scope: ACCESS permissions: PERM_ALL } aclSpec { type: GROUP scope: ACCESS permissions: PERM_ALL } aclSpec { type: OTHER scope: ACCESS permissions: PERM_ALL }}
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: setAcl took 2ms
2024-09-13 09:12:49,915 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: setAcl {}
2024-09-13 09:12:49,915 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Call -> IT-CDH-Node02/10.11.16.36:8020: getListing {src: "/user/hive/warehouse/test.db/test912" startAfter: "" needLocation: false}
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #1]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root sending #1006 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root]: IPC Client (699441216) connection to IT-CDH-Node02/10.11.16.36:8020 from root got value #1006
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: Call: getListing took 0ms
2024-09-13 09:12:49,915 TRACE org.apache.hadoop.ipc.ProtobufRpcEngine: [pool-8-thread-200]: 417: Response <- IT-CDH-Node02/10.11.16.36:8020: getListing {dirList { remainingEntries: 0 }}
2024-09-13 09:12:49,915 DEBUG shims.HdfsUtils: [pool-8-thread-200]: Return value is :0
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-195]: Commit transaction: count = 0, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.getIndexes(ObjectStore.java:4291)
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 2, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1106)
2024-09-13 09:12:49,915 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Open transaction: count = 3, isActive = true at:
org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:798)
2024-09-13 09:12:49,931 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125)
2024-09-13 09:12:49,931 DEBUG org.apache.hive.hcatalog.listener.DbNotificationListener: [pool-8-thread-200]: DbNotificationListener: Processing : 0:{"server":"","servicePrincipal":"","db":"test","table":"test912","tableObjJson":"{"1":{"str":"test912"},"2":{"str":"test"},"3":{"str":"root"},"4":{"i32":1726189969},"5":{"i32":0},"6":{"i32":0},"7":{"rec":{"1":{"lst":["rec",2,{"1":{"str":"sn"},"2":{"str":"string"}},{"1":{"str":"pn_code"},"2":{"str":"string"}}]},"2":{"str":"hdfs://nameservice1/user/hive/warehouse/test.db/test912"},"3":{"str":"org.apache.hadoop.mapred.TextInputFormat"},"4":{"str":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"},"5":{"tf":0},"6":{"i32":-1},"7":{"rec":{"2":{"str":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"},"3":{"map":["str","str",1,{"serialization.format":"1"}]}}},"8":{"lst":["str",0]},"9":{"lst":["rec",0]},"10":{"map":["str","str",0,{}]},"11":{"rec":{"1":{"lst":["str",0]},"2":{"lst":["lst",0]},"3":{"map":["lst","str",0,{}]}}},"12":{"tf":0}}},"8":{"lst":["rec",0]},"9":{"map":["str","str",7,{"totalSize":"0","numRows":"0","rawDataSize":"0","COLUMN_STATS_ACCURATE":"{\"BASIC_STATS\":\"true\"}","numFiles":"0","transient_lastDdlTime":"1726189969","numFilesErasureCoded":"0"}]},"12":{"str":"MANAGED_TABLE"},"13":{"rec":{"1":{"map":["str","lst",1,{"root":["rec",4,{"1":{"str":"INSERT"},"2":{"i32":-1},"3":{"str":"root"},"4":{"i32":1},"5":{"tf":1}},{"1":{"str":"SELECT"},"2":{"i32":-1},"3":{"str":"root"},"4":{"i32":1},"5":{"tf":1}},{"1":{"str":"UPDATE"},"2":{"i32":-1},"3":{"str":"root"},"4":{"i32":1},"5":{"tf":1}},{"1":{"str":"DELETE"},"2":{"i32":-1},"3":{"str":"root"},"4":{"i32":1},"5":{"tf":1}}]}]}}},"14":{"tf":0},"15":{"i32":1}}","timestamp":1726189969,"location":"hdfs://nameservice1/user/hive/warehouse/test.db/test912","ownerType":"USER","ownerName":"root","tableType":"MANAGED_TABLE"}
2024-09-13 09:12:49,932 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 1, isactive true at:
org.apache.hadoop.hive.metastore.ObjectStore.addNotificationEvent(ObjectStore.java:8505)
2024-09-13 09:12:49,932 DEBUG org.apache.hadoop.hive.metastore.ObjectStore: [pool-8-thread-200]: Commit transaction: count = 0, isactive true at:
sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
2024-09-13 09:12:49,936 DEBUG org.apache.sentry.binding.metastore.SentrySyncHMSNotificationsPostEventListener: [pool-8-thread-200]: Notifying sentry about Notification for CREATE_TABLE (id: 34044177)
2024-09-13 09:12:49,938 DEBUG org.apache.sentry.core.common.transport.SentryTransportPool: [pool-8-thread-200]: [1] obtained transport IT-CDH-Node02:8038
2024-09-13 09:12:49,938 DEBUG org.apache.sentry.core.common.transport.SentryTransportPool: [pool-8-thread-200]: Currently 19 active connections, 31 idle connections
2024-09-13 09:12:49,938 DEBUG org.apache.sentry.core.common.transport.RetryClientInvocationHandler: [pool-8-thread-200]: Calling notifyHmsEvent
Here is where the problem appears: the notifyHmsEvent call above (09:12:49,938) blocks, and the next log line does not arrive until 09:16:09,939 — roughly 200 seconds later.
2024-09-13 09:16:09,939 DEBUG org.apache.sentry.binding.metastore.SentrySyncHMSNotificationsPostEventListener: [pool-8-thread-200]: Finished Notifying sentry about Notification for CREATE_TABLE (id: 34044177)
2024-09-13 09:16:09,939 DEBUG org.apache.sentry.binding.metastore.SentrySyncHMSNotificationsPostEventListener: [pool-8-thread-200]: Latest processed event ID returned by the Sentry server: 0
2024-09-13 09:16:09,939 DEBUG org.apache.sentry.core.common.transport.SentryTransportPool: [pool-8-thread-200]: [1] returning IT-CDH-Node02:8038
2024-09-13 09:16:09,939 DEBUG org.apache.hadoop.hive.ql.log.PerfLogger: [pool-8-thread-200]: </PERFLOG method=create_table_with_environment_context start=1726189969887 end=1726190169939 duration=200052 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=200 retryCount=0 error=false>
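The PerfLogger entry above records the call's start/end epoch milliseconds and the logged duration. As a quick sanity check — a minimal parsing sketch, not part of any Hive tooling — the fields can be extracted to confirm that the stall is just over the 200 000 ms `sentry.notification.sync.timeout.ms` default:

```python
import re

# The PerfLogger entry from the HMS log above (line-wrapped in the original).
log_line = ("</PERFLOG method=create_table_with_environment_context "
            "start=1726189969887 end=1726190169939 duration=200052 "
            "from=org.apache.hadoop.hive.metastore.RetryingHMSHandler "
            "threadId=200 retryCount=0 error=false>")

# Collect all key=value fields from the entry.
fields = dict(re.findall(r"(\w+)=([\w.]+)", log_line))
start_ms = int(fields["start"])
end_ms = int(fields["end"])
duration_ms = int(fields["duration"])

# The logged duration equals end - start, and exceeds the 200 000 ms
# sentry.notification.sync.timeout.ms default by only ~52 ms: almost the
# entire create_table call was spent waiting for Sentry to catch up.
assert duration_ms == end_ms - start_ms
print(duration_ms)  # 200052
```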
2024-09-13 09:16:09,939 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: stopping client from cache: org.apache.hadoop.ipc.Client@a4c506
2024-09-13 09:16:09,939 DEBUG org.apache.hadoop.ipc.Client: [pool-8-thread-200]: stopping client from cache: org.apache.hadoop.ipc.Client@a4c506
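Per the analysis above, Sentry falls behind whenever `SENTRY_HMS_NOTIFICATION_ID` lags `NOTIFICATION_SEQUENCE`. The helper below is a sketch — not an official Sentry tool — that quantifies the backlog from the two IDs queried in this writeup, assuming (as Hive's schema implies) that `NEXT_EVENT_ID` is the next ID HMS will assign rather than the last one used:

```python
def notification_backlog(sentry_latest_id: int, hms_next_event_id: int) -> int:
    """Number of HMS notification events Sentry still has to consume.

    NOTIFICATION_SEQUENCE.NEXT_EVENT_ID stores the *next* event ID HMS will
    assign, so the newest existing event is hms_next_event_id - 1.
    """
    return (hms_next_event_id - 1) - sentry_latest_id

# Values taken from the metadata queries in this writeup.
sentry_latest = 34044370  # max(NOTIFICATION_ID) in sentry.SENTRY_HMS_NOTIFICATION_ID
hms_next = 34047898       # NEXT_EVENT_ID in metastore.NOTIFICATION_SEQUENCE

backlog = notification_backlog(sentry_latest, hms_next)
print(backlog)  # 3527
```

A small backlog suggests waiting (or lowering `sentry.notification.sync.timeout.ms`) so Sentry can catch up on its own; a large, growing backlog is the case where the writeup resorts to manually advancing `SENTRY_HMS_NOTIFICATION_ID`, discarding the pending events.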