#paper/ccfB
Rating: #paper/⭐ #pp/图结构学习
Pipeline
Redefined homophily metric:
$$NH_i^k=\frac{|\mathcal{N}(i,k,c_{max})|}{|\mathcal{N}(i,k)|}\quad\text{with}\quad c_{max}=\arg\max_{c\in[1,C]}|\mathcal{N}(i,k,c)|$$
where
$$\mathcal{N}(i,k) = \{v \mid (0\text{ or }1) \leq \mathrm{ShortestPath}(v_{i},v) \leq k\}$$
(this amounts to redefining $\mathcal{N}(i,k)$; the "0 or 1" lower bound controls whether the node itself is included), and
$$\mathcal{N}(i,k,c)=\{v_{j} \mid v_{j}\in\mathcal{N}(i,k),\ y_{j}=c\}.$$
Partition nodes into high-pass and low-pass masks:
$$m_i^{\mathrm{low}}=\mathbb{I}(NH_i^k<T),\quad m_i^{\mathrm{high}}=\mathbb{I}(NH_i^k\geq T)$$
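The homophily score and the threshold-based mask partition above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy graph, labels, `k=1`, and threshold `T=0.7` are all assumptions, and the BFS here includes the node itself in its neighborhood (the "0 or 1" lower bound set to 0).

```python
# Hedged sketch: compute NH_i^k over k-hop neighborhoods, then split
# nodes into low-/high-pass masks by a threshold T (a hyperparameter).
from collections import deque

def k_hop_neighbors(adj, i, k):
    """Nodes within shortest-path distance k of node i, via BFS.
    Includes node i itself (lower bound 0 in the definition)."""
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def node_homophily(adj, labels, i, k):
    """NH_i^k: fraction of the k-hop neighborhood in its majority class."""
    nbrs = k_hop_neighbors(adj, i, k)
    counts = {}
    for v in nbrs:
        counts[labels[v]] = counts.get(labels[v], 0) + 1
    return max(counts.values()) / len(nbrs)

# Toy path graph 0-1-2-3; nodes 0-2 are class 0, node 3 is class 1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = {0: 0, 1: 0, 2: 0, 3: 1}
T = 0.7  # assumed threshold
nh = {i: node_homophily(adj, labels, i, k=1) for i in adj}
m_low = {i: nh[i] < T for i in adj}    # heterophilous nodes
m_high = {i: nh[i] >= T for i in adj}  # homophilous nodes
```

Node 2 sees neighborhood {1, 2, 3} with labels {0, 0, 1}, so NH = 2/3 < T and it lands in the low-pass mask, while nodes 0 and 1 have purely same-class neighborhoods and land in the high-pass mask.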
Backbone:
$$H^s=\widetilde{A}_\text{mask}\cdot\mathrm{ReLU}\left(\widetilde{A}_\text{mask}\cdot XW_0^s\right)\cdot W_1^s,\quad A_\text{mask}=A\odot\boldsymbol{m}^s$$
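A minimal NumPy sketch of one masked branch of this backbone. Several details are assumptions on my part: the symmetric normalization $\widetilde{A}=D^{-1/2}(A+I)D^{-1/2}$ is the standard GCN convention and may differ from the paper's, and the mask $\boldsymbol{m}^s$ is applied here by zeroing out the rows of masked-out nodes.

```python
# Hedged sketch of H^s = Ã_mask · ReLU(Ã_mask · X W0^s) · W1^s
# with A_mask = A ⊙ m^s. Normalization scheme is an assumption.
import numpy as np

def normalize(A):
    """Symmetric normalization with self-loops (assumed GCN-style)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def masked_branch(A, X, m, W0, W1):
    """One branch s: mask A by the node mask m^s, normalize, propagate."""
    A_mask = A * m[:, None]  # A ⊙ m^s (zero rows of masked-out nodes)
    A_tilde = normalize(A_mask)
    return A_tilde @ np.maximum(A_tilde @ X @ W0, 0) @ W1

rng = np.random.default_rng(0)
n, f, h, c = 4, 3, 8, 2  # nodes, input dim, hidden dim, classes (toy sizes)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(n, f))
W0, W1 = rng.normal(size=(f, h)), rng.normal(size=(h, c))
m_low = np.array([0.0, 0.0, 1.0, 1.0])  # assumed low-pass node mask
H_low = masked_branch(A, X, m_low, W0, W1)  # shape (n, c)
```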
$$\boldsymbol{B}=\mathrm{softmax}\left(\left(\alpha_\mathrm{low}\boldsymbol{H}^\mathrm{low}+\alpha_\mathrm{high}\boldsymbol{H}^\mathrm{high}+\alpha_\mathrm{x}\boldsymbol{H}_\mathrm{x}\right)\cdot\boldsymbol{W}_\mathrm{o}\right)$$
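The final combination step can be sketched as below. The $\alpha$ weights are taken as fixed scalars here purely for illustration; whether the paper learns them (and at what granularity) is not shown in these notes.

```python
# Hedged sketch of B = softmax((α_low H^low + α_high H^high + α_x H_x) W_o).
# All shapes and the scalar α values are toy assumptions.
import numpy as np

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n, d, c = 4, 8, 3  # nodes, branch output dim, classes
H_low, H_high, H_x = (rng.normal(size=(n, d)) for _ in range(3))
alpha = {"low": 0.5, "high": 0.3, "x": 0.2}  # assumed fixed weights
W_o = rng.normal(size=(d, c))

B = softmax((alpha["low"] * H_low
             + alpha["high"] * H_high
             + alpha["x"] * H_x) @ W_o)  # each row is a class distribution
```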
Summary:
Overall, the paper essentially just defines a homophily ratio, with little real novelty. Splitting the high-pass and low-pass masks introduces a threshold $T$, an extra hand-tuned hyperparameter, which runs somewhat counter to the trend in supervised and semi-supervised classification toward reducing such hyperparameters.