The mathematical interpretation of the model in these personal notes is largely inspired by, and draws on, this blog post.
Notation
$T = S \cup Q$: the task, consisting of a support set and a query set; the support set $S$ in each episode serves as the labeled training set.
$x_i$ and $y_i \in \{C_1, \dots, C_N\} = C_T \subset C$: the $i$-th input sample and its label; $C$ is the set of all classes of either the training or the test dataset, with $C_{train} \cap C_{test} = \emptyset$.
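The episode construction above can be sketched as follows. This is an illustrative helper, not the paper's code: `sample_episode` and the toy dataset are assumptions, but it shows the structure of $T = S \cup Q$ with $C_T \subset C$.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_query=1, seed=0):
    """Sample one N-way K-shot episode: a support set S and a query set Q.

    `dataset` maps each class label to a list of examples. The N episode
    classes C_T are drawn from the class set C; the support and query
    examples of each class are disjoint. (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)  # C_T ⊂ C
    support, query = [], []
    for c in classes:
        examples = rng.sample(dataset[c], k_shot + q_query)
        support += [(x, c) for x in examples[:k_shot]]
        query += [(x, c) for x in examples[k_shot:]]
    return support, query  # T = S ∪ Q

# toy "dataset": 10 classes with 5 examples each
data = {c: [f"img_{c}_{i}" for i in range(5)] for c in range(10)}
S, Q = sample_episode(data, n_way=5, k_shot=1, q_query=1)
```

The train/test class split ($C_{train} \cap C_{test} = \emptyset$) would be enforced one level above this, by partitioning the keys of `dataset` before sampling episodes.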
$G = (V, E; T)$: the graph constructed with the samples from task $T$, where $V$ is the node set and $E$ is the edge set.
$y_{ij}$: the ground-truth edge label, defined by the ground-truth node labels.
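Since edge labels are defined by node labels, they reduce to an equality test: $y_{ij} = 1$ if $y_i = y_j$, else $0$. A minimal sketch (the helper name is mine, not the paper's):

```python
import numpy as np

def edge_labels(node_labels):
    """Ground-truth edge labels: y_ij = 1 if y_i == y_j, else 0."""
    y = np.asarray(node_labels)
    return (y[:, None] == y[None, :]).astype(float)

# e.g. node labels [0, 0, 1]: nodes 0 and 1 share a class, node 2 does not
Y = edge_labels([0, 0, 1])
```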
$\mathbf e_{ij} = \{e_{ijd}\}_{d=1}^{2} \in [0,1]^2$: the edge feature, representing the (normalized) strengths of the intra- and inter-class relations of the two connected nodes.
$\tilde e_{ijd}$: the normalized edge feature.
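The 2-dimensional edge features and their normalization can be sketched as below. The initialization scheme ($[1,0]$ for a labeled same-class pair, $[0,1]$ for a labeled different-class pair, $[0.5,0.5]$ when either label is unknown) and the per-dimension row normalization are my reading of the setup, not code from the paper:

```python
import numpy as np

def init_edge_features(node_labels, known):
    """Initialize 2-d edge features e_ij = (intra, inter).

    Sketch of an EGNN-style initialization: [1, 0] if both nodes are
    labeled and share a class, [0, 1] if labeled and different,
    [0.5, 0.5] if either node's label is unknown (a query node).
    """
    y = np.asarray(node_labels)
    n = len(y)
    e = np.full((n, n, 2), 0.5)
    kk = np.outer(known, known).astype(bool)   # both labels known
    same = y[:, None] == y[None, :]
    e[kk & same] = [1.0, 0.0]
    e[kk & ~same] = [0.0, 1.0]
    return e

def normalize(e):
    """Row-normalize each dimension d: e_ijd / sum_k e_ikd (a sketch)."""
    return e / e.sum(axis=1, keepdims=True)

E = init_edge_features([0, 0, 1], known=[1, 1, 0])
N = normalize(E)
```

After normalization, each node's outgoing edge strengths sum to 1 in every dimension, which is what makes the features interpretable as relative relation strengths.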
$f^l_v$: the node feature transformation network (at layer $l$).
$f^l_e$: the metric network.
$\hat y_{ij}$: the predicted probability that the two nodes $V_i$ and $V_j$ belong to the same class.
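A metric network in this role maps a pair of node features to a similarity in $(0,1)$. The sketch below uses a tiny MLP on the absolute feature difference with a sigmoid output; the architecture, shapes, and weights are illustrative assumptions, not the paper's $f^l_e$:

```python
import numpy as np

def metric_network(vi, vj, W1, b1, w2, b2):
    """Sketch of a metric network: |v_i - v_j| -> similarity in (0, 1).

    The absolute difference makes the score symmetric in (v_i, v_j).
    All weight shapes here are illustrative.
    """
    h = np.maximum(0.0, W1 @ np.abs(vi - vj) + b1)  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))     # sigmoid -> \hat y_ij

rng = np.random.default_rng(0)
vi, vj = rng.normal(size=4), rng.normal(size=4)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0
p = metric_network(vi, vj, W1, b1, w2, b2)  # probability of same class
```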