Full paper title: Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates
The English here is typed entirely by hand, summarizing and paraphrasing the original paper. Spelling and grammar mistakes are hard to avoid; if you spot any, feel free to point them out in the comments! This post reads like study notes, so take it with a grain of salt.
Contents
2.3.1. Tensor Representation of Multi-view Brain Networks
2.3.2. Geometric Deep Learning Layers
2.3.3. CBT Construction Layer and Subject Normalization Loss Function
2.3.4. CBT Refinement After the Training
2.4.1. Connectomic Datasets and Model Hyperparameter Setting
2.4.2. CBT Representativeness Test
2.4.3. CBT Discriminativeness Reproducibility Test
2.4.4. Discovery of Most Discriminative Brain ROIs for Each Disordered Population
1. TL;DR
1.1. Takeaways
(1) Includes a visual walkthrough of the pipeline
(2) ? Could they have picked even fewer subjects?
2. Section-by-Section Close Reading
2.1. Abstract
①They work on the connectional brain template (CBT), a normalized connectome that serves as the "average" of a population of brain networks
②Challenge: existing population fusion methods are mostly linear, so they fail to capture non-linear connectivity patterns
③They propose the Deep Graph Normalizer (DGN), which fuses multi-view brain networks and captures the non-linear patterns shared across subjects
2.2. Introduction
①Multi-modal datasets such as the Human Connectome Project (HCP) and Connectomes Related to Human Disease (CRHD) promote the estimation of CBTs (so far mainly for healthy populations)
②Briefly introduced DGN
2.3. Proposed Method
①Overall framework: (pipeline figure from the paper, not reproduced here)
②MVBNs are $\{\mathbf{X}_v^s\}_{v=1}^{n_v}$, where $\mathbf{X}_v^s \in \mathbb{R}^{n_r \times n_r}$ means the $v$-th view of subject $s$ ($n_r$ ROIs, $n_v$ views).
③The same as A) in the figure, each subject can be represented as a tensor $\mathcal{T}^s \in \mathbb{R}^{n_r \times n_r \times n_v}$ ([number of ROIs, number of ROIs, number of views])
④Cross-validation: 5-fold
2.3.1. Tensor Representation of Multi-view Brain Networks
①The edge attribute vector $\mathbf{e}_{ij} \in \mathbb{R}^{n_v}$ stores the view attributes, i.e. the connectivity weights of edge $(i, j)$ across all $n_v$ views
②The diagonal of each ROI connectivity matrix is set to 0
③Node attribute matrix: they define it as the identity matrix because there are no original node features
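To make the tensor representation concrete, here is a minimal NumPy sketch; all sizes and data are hypothetical stand-ins, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_roi, n_view = 35, 4          # stand-in sizes: 35 Desikan-Killiany ROIs, 4 views

# Stack the n_view connectivity matrices of one subject into a tensor T^s
views = [rng.random((n_roi, n_roi)) for _ in range(n_view)]
views = [(x + x.T) / 2 for x in views]        # brain networks are symmetric
T = np.stack(views, axis=-1)                  # shape (n_roi, n_roi, n_view)

# Diagonal of each view (ROI self-connections) is set to 0
for v in range(n_view):
    np.fill_diagonal(T[:, :, v], 0)

# Edge attribute vector e_ij: the n_view connectivity weights of edge (i, j)
e_01 = T[0, 1, :]                             # shape (n_view,)

# No original node features -> identity matrix as the node attribute matrix
V0 = np.eye(n_roi)
```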
2.3.2. Geometric Deep Learning Layers
①Message passing operation (edge-conditioned graph convolution):

$$\mathbf{v}_i^{l} = \Theta^{l}\,\mathbf{v}_i^{l-1} + \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} F^{l}\!\left(\mathbf{e}_{ij}; \mathbf{W}^{l}\right)\mathbf{v}_j^{l-1} + \mathbf{b}^{l}$$

where $\mathbf{v}_i^{l}$ denotes the embedding/feature of ROI $i$ at the layer $l$, $\Theta^{l}$ denotes the learnable parameter at the layer $l$, $\mathcal{N}(i)$ represents the neighbours of ROI $i$, $\mathbf{b}^{l}$ is of course the bias term, $F^{l}$ maps $\mathbb{R}^{n_v}$ to $\mathbb{R}^{d_l \times d_{l-1}}$ (why is the mapping like this?? Presumably because $F^{l}$ has to turn the $n_v$-dimensional edge attribute $\mathbf{e}_{ij}$ into a matrix that transforms a $d_{l-1}$-dimensional embedding into a $d_l$-dimensional one), and $\mathbf{W}^{l}$ denotes the weights of the filter network $F^{l}$.
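A toy NumPy sketch of one such message-passing step, assuming a fully connected graph; the filter network $F^l$ is stood in for by a single fixed linear map (the real model uses a learned neural network), and all parameters are random stand-ins rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_roi, n_view = 35, 4
d_prev, d_next = 35, 36    # d_0 = n_roi (identity node features); d_1 is arbitrary

T = rng.random((n_roi, n_roi, n_view))    # edge attributes: e_ij = T[i, j, :]
V_prev = np.eye(n_roi)                    # layer-0 ROI embeddings (identity)

# Stand-ins for the learnable parameters (random here, trained in the real model)
Theta = rng.standard_normal((d_next, d_prev)) * 0.01   # root-weight parameter
W = rng.standard_normal((d_next * d_prev, n_view)) * 0.01
b = np.zeros(d_next)

def F(e_ij):
    """Filter network F^l: maps an edge attribute in R^{n_view} to an
    edge-specific weight matrix in R^{d_next x d_prev} (linear map here)."""
    return (W @ e_ij).reshape(d_next, d_prev)

V_next = np.empty((n_roi, d_next))
for i in range(n_roi):
    neigh = [j for j in range(n_roi) if j != i]        # fully connected graph
    msgs = [F(T[i, j, :]) @ V_prev[j] for j in neigh]  # edge-conditioned messages
    V_next[i] = Theta @ V_prev[i] + np.mean(msgs, axis=0) + b
```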
2.3.3. CBT Construction Layer and Subject Normalization Loss Function
①Output obtained: after the last layer, every ROI $i$ has an embedding $\mathbf{v}_i \in \mathbb{R}^{d_L}$, stacked into a matrix $\mathbf{V} \in \mathbb{R}^{n_r \times d_L}$
②They horizontally replicate $\mathbf{V}$ to become a tensor $\mathcal{R} \in \mathbb{R}^{n_r \times n_r \times d_L}$ with $\mathcal{R}[i, j, :] = \mathbf{v}_i$
③Transposing (swapping the first two axes) gives $\mathcal{R}^{\top}$ with $\mathcal{R}^{\top}[i, j, :] = \mathbf{v}_j$, a little bit different from the above ahh
④Calculating the element-wise absolute difference between $\mathcal{R}$ and $\mathcal{R}^{\top}$, then summing it up along the $z$-axis, we get the final CBT $\mathbf{C} \in \mathbb{R}^{n_r \times n_r}$, whose entries are $\mathbf{C}_{ij} = \sum_{k=1}^{d_L} |v_{ik} - v_{jk}| = \|\mathbf{v}_i - \mathbf{v}_j\|_1$. Quite a magic~ (the step-by-step colored-matrix example is not reproduced here, but I like that color scheme a lot)
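The replicate, transpose, absolute-difference, and sum steps boil down to pairwise L1 distances between ROI embeddings; a sketch with random stand-in embeddings:

```python
import numpy as np

rng = np.random.default_rng(2)
n_roi, d_L = 35, 5
V = rng.random((n_roi, d_L))               # final-layer ROI embeddings

# Horizontally replicate into a tensor R with R[i, j, :] = v_i
R = np.repeat(V[:, None, :], n_roi, axis=1)          # (n_roi, n_roi, d_L)
# "Transpose" (swap the first two axes): R_T[i, j, :] = v_j
R_T = R.transpose(1, 0, 2)

# Element-wise absolute difference, then sum along the z-axis -> CBT
C = np.abs(R - R_T).sum(axis=2)                      # (n_roi, n_roi)

# Closed form of the same pipeline: C[i, j] = ||v_i - v_j||_1
C_check = np.abs(V[:, None, :] - V[None, :, :]).sum(axis=-1)
```

The result is symmetric with a zero diagonal, as a connectivity matrix should be.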
⑤⭐They propose a new loss term called the Subject Normalization Loss (SNL). THEY DON'T WANT TO CALCULATE THE LOSS ON THE SAME SAMPLES ALL THE TIME, so they randomly choose a subset $S$ of training subjects to measure the loss against (which also acts as regularization):

$$\mathcal{L}_{SNL} = \sum_{s' \in S}\sum_{v=1}^{n_v} \lambda_v \left\| \mathbf{C}^{s} - \mathbf{X}_v^{s'} \right\|_F$$

where $\lambda_v = \frac{\mu_{\max}}{\mu_v}$ is a view-normalization coefficient and $\mu_v$ denotes the average brain connectivity weight of view $v$
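A sketch of the SNL term, assuming the weighted Frobenius-distance form $\sum_{s' \in S}\sum_v \lambda_v \|\mathbf{C} - \mathbf{X}_v^{s'}\|_F$ with $\lambda_v = \mu_{\max}/\mu_v$; the data are random stand-ins, and only the subset size 10 matches the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_roi, n_view = 20, 35, 4
X = rng.random((n_sub, n_roi, n_roi, n_view))   # all training subjects' tensors
C = rng.random((n_roi, n_roi))                  # CBT generated for one subject

# View-normalization coefficients: lambda_v = mu_max / mu_v
mu = X.mean(axis=(0, 1, 2))                     # average connectivity per view
lam = mu.max() / mu

# Random subset S of training subjects (|S| = 10 in the paper's setting)
S = rng.choice(n_sub, size=10, replace=False)

# Weighted Frobenius distance of the CBT to each sampled subject's views
snl = sum(
    lam[v] * np.linalg.norm(C - X[s_p, :, :, v], ord="fro")
    for s_p in S
    for v in range(n_view)
)
```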
2.3.4. CBT Refinement After the Training
①They construct their final CBT by selecting the element-wise median of all the CBTs generated for the training subjects (the median is robust to outliers)
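The refinement step can be sketched as an element-wise median over the per-subject CBTs; the CBTs below are random stand-ins for the outputs of the trained model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_train, n_roi = 15, 35

# One CBT per training subject (random stand-ins for the trained model's outputs)
cbts = rng.random((n_train, n_roi, n_roi))
cbts = (cbts + cbts.transpose(0, 2, 1)) / 2    # keep each CBT symmetric

# Element-wise median across the training CBTs -> final refined CBT
C_final = np.median(cbts, axis=0)
```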
2.4. Results and Discussion
2.4.1. Connectomic Datasets and Model Hyperparameter Setting
(1)ADNI GO
①Sample: 77, with 41 AD and 36 LMCI
②Features of subject: maximum principal curvature, the mean cortical thickness, the mean sulcal depth, and the average curvature
(2)ABIDE I
①Sample: 310, with 155 NC and 155 ASD
②Features of subject: 6 cortical morphological brain networks extracted from the 4 aforementioned cortical measures plus cortical surface area and minimum principle area
(3)Comprehensive settings
①Cross-validation: 5-fold
②ROI: Desikan-Killiany atlas with 35 ROIs
③(?)Brain network: computing the pairwise absolute difference in cortical measurements between pairs of ROIs.
④Hyperparameter settings: by grid search
⑤Optimizer: Adam
⑥Learning rate: 0.0005
⑦Random samples in SNL: 10
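Point ③'s network construction (pairwise absolute difference of a cortical measurement between ROIs) might look like this, with random stand-in measurements:

```python
import numpy as np

rng = np.random.default_rng(5)
n_roi = 35                                # Desikan-Killiany atlas
thickness = rng.random(n_roi)             # e.g. mean cortical thickness per ROI

# One network view: pairwise absolute difference of the measure between ROIs
view = np.abs(thickness[:, None] - thickness[None, :])   # (n_roi, n_roi)
```

Each cortical measure yields one such view, and the views are stacked into the subject tensor described in 2.3.1.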
2.4.2. CBT Representativeness Test
①Representativeness comparison between CBTs generated by the proposed model and by netNorm, measured as the mean Frobenius distance between the CBT and the individual networks; DGN achieves the lower distance (comparison figure not reproduced here)
2.4.3. CBT Discriminativeness Reproducibility Test
①Computing the most discriminative ROIs by calculating the element-wise absolute difference between the CBTs of the two classes (e.g. AD vs. LMCI), then summing each row to get one discriminativeness score per ROI and picking the top ones
②Overlap between the most discriminative ROIs identified by the different models (figure not reproduced here)
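The discriminativeness computation in ① can be sketched as follows, with random stand-in CBTs:

```python
import numpy as np

rng = np.random.default_rng(6)
n_roi, k = 35, 10

cbt_a = rng.random((n_roi, n_roi))        # CBT of class 1 (e.g. AD)
cbt_b = rng.random((n_roi, n_roi))        # CBT of class 2 (e.g. LMCI)

# Absolute difference between the two class CBTs, summed per ROI row
score = np.abs(cbt_a - cbt_b).sum(axis=1)

# Indices of the top-k most discriminative ROIs
top_k = np.argsort(score)[::-1][:k]
```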
2.4.4. Discovery of Most Discriminative Brain ROIs for Each Disordered Population
①The most discriminative ROIs of ASD: left insula cortex, left superior temporal sulcus (STS) and right frontal pole
②The most discriminative ROIs of AD: the left temporal pole (TP) and right entorhinal cortex (EC)
2.5. Conclusion
For future work, they aim to extend the model to multi-modal data, explore geometric deep learning further (?), and add topological loss constraints
3. Supplementary Knowledge
3.1. geometric deep learning
(1)Reference tutorial: 几何深度学习(Geometric Deep Learning)技术 - 知乎 (zhihu.com)
(2)Reference paper: Geometric Deep Learning: Going beyond Euclidean data | IEEE Journals & Magazine | IEEE Xplore
4. Reference
Gurbuz, M. B. & Rekik, I. (2020) 'Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates', MICCAI.