[Paper Notes] Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates

Full title: Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates

Code: GitHub - basiralab/DGN: How to fuse a population of graphs into a single one using graph neural networks?

Paper link: Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates | SpringerLink

The English is typed entirely by hand, summarizing and paraphrasing the original paper. Unavoidable spelling and grammar mistakes may appear; if you spot any, feel free to point them out in the comments. This article leans toward personal notes, so read with caution.

Table of Contents

1. TL;DR

1.1. Takeaways

2. Section-by-Section Reading

2.1. Abstract

2.2. Introduction

2.3. Proposed Method

2.3.1. Tensor Representation of Multi-view Brain Networks

2.3.2. Geometric Deep Learning Layers

2.3.3. CBT Construction Layer and Subject Normalization Loss Function

2.3.4. CBT Refinement After the Training

2.4. Results and Discussion

2.4.1. Connectomic Datasets and Model Hyperparameter Setting

2.4.2. CBT Representativeness Test

2.4.3. CBT Discriminativeness Reproducibility Test

2.4.4. Discovery of Most Discriminative Brain ROIs for Each Disordered Population

2.5. Conclusion

3. Supplementary Knowledge

3.1. geometric deep learning

4. Reference


1. TL;DR

1.1. Takeaways

(1)This post includes a visualized derivation of the pipeline

(2)?Could they have chosen even fewer subjects?

2. Section-by-Section Reading

2.1. Abstract

        ①They introduce the connectional brain template (CBT), which serves as an "average" connectome for a population

        ②Challenge: existing population-fusion methods are mostly linear and therefore fail to capture non-linear relationships across subjects

        ③They propose the Deep Graph Normalizer (DGN), which fuses multi-view brain networks while capturing the non-linear patterns across subjects

2.2. Introduction

        ①Multi-modal datasets such as the Human Connectome Project (HCP) and Connectomes Related to Human Disease (CRHD) facilitate the estimation of CBTs for the healthy brain

        ②Briefly introduced DGN

2.3. Proposed Method

         ①Overall framework of DGN (figure omitted)

        ②The MVBNs (multi-view brain networks) are T=\{\mathbf{T}_{1}^{1},\mathbf{T}_{2}^{1},\ldots,\mathbf{T}_{i}^{v},\ldots,\mathbf{T}_{N}^{n_{v}}\}, where \mathbf{T}_i^v denotes the v-th view of subject i.

        ③As shown in panel A) of the figure, each subject s can be represented as a tensor \mathcal{T}_s\in\mathbb{R}^{n_r\times n_r\times n_v} ([number of ROIs, number of ROIs, number of views])

        ④Cross validation: 5-fold

2.3.1. Tensor Representation of Multi-view Brain Networks

        ①The edge vector \mathbf{e}_{ij}\in\mathbb{R}^{n_{v}\times1} stores the multi-view attributes of the edge between ROIs i and j

        ②The diagonal of each connectivity matrix is set to 0

        ③Node attribute matrix \mathbf{V}^0\in\mathbb{R}^{n_r\times d_0}: they define it as the identity matrix since there are no original node features

2.3.2. Geometric Deep Learning Layers

        ①Message passing operation:

\mathbf{v}_i^l=\frac{1}{|N(i)|+1}\left(\boldsymbol{\Theta}^l\cdot\mathbf{v}_i^{l-1}+\sum_{j\in N(i)} F^l(\mathbf{e}_{ij};\mathbf{W}^l)\,\mathbf{v}_j^{l-1}+\mathbf{b}^l\right)

F^l(\mathbf{e}_{ij};\mathbf{W}^l)=\boldsymbol{\Theta}_{ij}

where \mathbf{v}_i^l denotes the embedding/feature of ROI i at layer l, \boldsymbol{\Theta}^l denotes the learnable parameters at layer l, N(i) represents the neighbours of ROI i, \mathbf{b}^l\in\mathbb{R}^{d_l} is of course the bias term, and F^l is a filter-generating network with weights \mathbf{W}^{l} that maps \mathbb{R}^{n_v} to \mathbb{R}^{d_{l}\times d_{l-1}}. (Why this mapping? Because F^l turns the n_v-dimensional edge attributes \mathbf{e}_{ij} into a d_l\times d_{l-1} weight matrix \boldsymbol{\Theta}_{ij}, which multiplies the neighbour embedding \mathbf{v}_j^{l-1}\in\mathbb{R}^{d_{l-1}} to yield a d_l-dimensional message; in other words, the convolution weights are generated dynamically from the edge features, which is exactly edge-conditioned convolution.)
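
To make the mapping concrete, below is a minimal sketch of one such layer on a fully connected brain graph, written in plain PyTorch as my own illustration rather than the authors' implementation (the class name EdgeConditionedConv, the hidden size 64 inside the filter network, and the ReLU activation are my assumptions; as far as I know, the DGN repository builds this layer on PyTorch Geometric instead):

```python
import torch
import torch.nn as nn

class EdgeConditionedConv(nn.Module):
    """One message-passing layer: edge attributes generate the conv weights."""
    def __init__(self, d_in, d_out, n_v):
        super().__init__()
        self.d_in, self.d_out = d_in, d_out
        self.theta = nn.Linear(d_in, d_out, bias=False)   # Theta^l (self term)
        self.bias = nn.Parameter(torch.zeros(d_out))      # b^l
        # F^l: R^{n_v} -> R^{d_out x d_in}, realized as a small MLP
        # (its parameters play the role of W^l)
        self.filter_net = nn.Sequential(
            nn.Linear(n_v, 64), nn.ReLU(), nn.Linear(64, d_out * d_in))

    def forward(self, V, E):
        # V: (n_r, d_in) node embeddings, E: (n_r, n_r, n_v) edge attributes
        n_r = V.size(0)
        # Theta_ij = F^l(e_ij; W^l): one (d_out x d_in) matrix per edge
        Theta = self.filter_net(E).view(n_r, n_r, self.d_out, self.d_in)
        msgs = torch.einsum('ijou,ju->ijo', Theta, V)     # Theta_ij @ v_j
        mask = 1.0 - torch.eye(n_r, device=V.device)      # N(i) excludes i itself
        agg = (msgs * mask[..., None]).sum(dim=1)         # sum over j in N(i)
        # fully connected graph, so |N(i)| + 1 = n_r; activation assumed ReLU
        return torch.relu((self.theta(V) + agg + self.bias) / n_r)

# usage with identity node features (V^0 = I, so d_0 = n_r = 35) and 4 views
layer = EdgeConditionedConv(d_in=35, d_out=36, n_v=4)
V1 = layer(torch.eye(35), torch.rand(35, 35, 4))          # -> (35, 36)
```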

2.3.3. CBT Construction Layer and Subject Normalization Loss Function

        ①Output obtained: \mathbf{V}^{L} = \begin{bmatrix} \mathbf{v}_{1}^{L},\mathbf{v}_{2}^{L},...,\mathbf{v}_{n_{r}-1}^{L},\mathbf{v}_{n_{r}}^{L} \end{bmatrix}^{T}

        ②They horizontally replicate \mathbf{V}^{L} \in \mathbb{R}^{n_{r}\times d_{L}} to become \mathcal{R}\in\mathbb{R}^{n_{r}\times n_{r}\times d_{L}}

        ③Transposing \mathcal{R}_{xyz} to  \mathcal{R}_{yxz}=\mathcal{R}^T

        ④Calculating the element-wise absolute difference between \mathcal{R} and \mathcal{R}^T. Here is my step-by-step reading. Suppose we have some \mathbf{V}^{L} \in \mathbb{R}^{n_{r}\times d_{L}}:

Employing the replication operation stacks n_r copies of \mathbf{V}^{L}, so it actually becomes \mathcal{R}\in\mathbb{R}^{n_{r}\times n_{r}\times d_{L}} with \mathcal{R}_{ijk}=\mathbf{V}_{jk}^{L}.

Doing the transpose operation, we get \mathcal{R}^T\in\mathbb{R}^{n_{r}\times n_{r}\times d_{L}} with \mathcal{R}_{ijk}^{T}=\mathbf{V}_{ik}^{L}: it differs from the \mathcal{R} above only in which axis indexes the ROI.

The element-wise absolute difference is therefore \left|\mathcal{R}-\mathcal{R}^{T}\right|_{ijk}=\left|\mathbf{V}_{ik}^{L}-\mathbf{V}_{jk}^{L}\right|.

Quite magical~ Then sum it up along the z-axis:

\mathbf{C}_{ij}=\sum_{k=1}^{d_{L}}\left|\mathbf{V}_{ik}^{L}-\mathbf{V}_{jk}^{L}\right|=\left\|\mathbf{v}_{i}^{L}-\mathbf{v}_{j}^{L}\right\|_{1}

We get the final CBT \mathbf{C}\in\mathbb{R}^{n_r\times n_r}.
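
The whole construction collapses into a few tensor operations; below is a minimal NumPy sketch (my own illustration, with made-up shapes and random values standing in for a trained \mathbf{V}^{L}):

```python
import numpy as np

n_r, d_L = 35, 5                     # e.g. 35 ROIs, d_L features per ROI
V_L = np.random.rand(n_r, d_L)       # final node embeddings V^L

# horizontally replicate: R[i, j, :] = v_j for every i
R = np.repeat(V_L[np.newaxis, :, :], n_r, axis=0)    # (n_r, n_r, d_L)
R_T = R.transpose(1, 0, 2)                           # R^T[i, j, :] = v_i
C = np.abs(R - R_T).sum(axis=2)                      # C_ij = ||v_i - v_j||_1

# sanity check: C is symmetric with a zero diagonal, like a connectivity matrix
assert np.allclose(C, C.T) and np.allclose(np.diag(C), 0)
```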

        ⑤⭐They propose a new loss term called the Subject Normalization Loss (SNL). THEY DON'T WANT TO CALCULATE THE LOSS AGAINST THE SAME SAMPLES ALL THE TIME, so they randomly choose a subset S of training subjects to measure the loss against (which acts as regularization):

SNL_s=\sum\limits_{v=1}^{n_v}\sum\limits_{i\in S}\left\|\mathbf{C}_s-\mathbf{T}_i^v\right\|_F\times\lambda_v

and the training objective is

\min\limits_{\mathbf{W}_1,\mathbf{b}_1,\ldots,\mathbf{W}_L,\mathbf{b}_L}\frac{1}{|T|}\sum\limits_{s=1}^{|T|}SNL_s

where \lambda_v=\frac{\max\{\mu_j\}_{j=1}^{n_v}}{\mu_v} is the view-normalization weight, and \mu_v denotes the average brain connectivity weight of view v across the training set
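
A minimal PyTorch sketch of the SNL term for one subject (my own illustration; the function names snl_loss and view_weights and the tensor layouts are assumptions, not the authors' API):

```python
import torch

def snl_loss(C_s, views, lam):
    # C_s: (n_r, n_r) CBT generated from subject s
    # views: (|S|, n_r, n_r, n_v) random subset S of training subjects
    # lam: (n_v,) view weights lambda_v
    diff = C_s[None, :, :, None] - views             # broadcast over S and views
    fro = torch.sqrt((diff ** 2).sum(dim=(1, 2)))    # Frobenius norms, (|S|, n_v)
    return (fro * lam[None, :]).sum()

def view_weights(train_views):
    # lambda_v = max_j(mu_j) / mu_v, where mu_v is the mean connectivity
    # weight of view v; train_views: (N, n_r, n_r, n_v)
    mu = train_views.mean(dim=(0, 1, 2))             # (n_v,)
    return mu.max() / mu
```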

2.3.4. CBT Refinement After the Training

        ①After training, they feed every training subject through the trained network and construct the final CBT as the element-wise median of all the generated CBTs
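
In code, the refinement step might look like the following minimal sketch (my own illustration; model stands for the trained DGN and is hypothetical):

```python
import torch

def refine_cbt(model, train_subjects):
    # feed every training subject through the trained network, then take the
    # element-wise median of all generated CBTs as the final, refined CBT
    with torch.no_grad():
        cbts = torch.stack([model(T_s) for T_s in train_subjects])  # (N, n_r, n_r)
    return cbts.median(dim=0).values
```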

2.4. Results and Discussion

2.4.1. Connectomic Datasets and Model Hyperparameter Setting

(1)ADNI GO (AD/LMCI)

        ①Sample: 77 subjects, with 41 AD and 36 LMCI

        ②Views per subject: 4 cortical morphological brain networks derived from the maximum principal curvature, the mean cortical thickness, the mean sulcal depth, and the average curvature

(2)ABIDE (NC/ASD)

        ①Sample: 310 subjects, with 155 NC and 155 ASD

        ②Views per subject: 6 cortical morphological brain networks, derived from the cortical surface area and the minimum principle area in addition to the 4 aforementioned cortical measures

(3)Comprehensive settings

        ①Cross validation: 5-fold

        ②ROI: Desikan-Killiany atlas with 35 ROIs

        ③Brain networks: each view is computed as the pairwise absolute difference in the corresponding cortical measurement between pairs of ROIs (see the sketch after this list)

        ④Hyperparameter settings: by grid search

        ⑤Optimizer: Adam

        ⑥Learning rate: 0.0005

        ⑦Random samples in SNL: 10
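
As referenced in ③ above, each morphological network view is derived from one cortical measure; here is a minimal NumPy sketch (my own illustration, with random values standing in for real measurements):

```python
import numpy as np

n_r = 35                                   # Desikan-Killiany atlas ROIs

def build_view(measure):
    # edge weight between two ROIs = absolute difference of their regional
    # measurements (so the diagonal is naturally zero)
    return np.abs(measure[:, None] - measure[None, :])   # (n_r, n_r)

# stack all views into the subject tensor T_s in R^{n_r x n_r x n_v}
measures = [np.random.rand(n_r) for _ in range(4)]       # e.g. the 4 AD/LMCI measures
T_s = np.stack([build_view(m) for m in measures], axis=2)
print(T_s.shape)                                         # (35, 35, 4)
```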

2.4.2. CBT Representativeness Test

        ①Representativeness comparison between the CBTs generated by the proposed model and by netNorm, measured as the mean Frobenius distance between a CBT and all subjects' views (figure omitted)

2.4.3. CBT Discriminativeness Reproducibility Test

        ①The top k most discriminative ROIs are computed by taking the absolute difference between the CBTs of classes A and B and scoring each ROI by it (a sketch follows below)

        ②Overlap between the discriminative ROIs selected by the different models (figure omitted)
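
A minimal sketch of the ROI selection in ① (my own reading: I assume each ROI is scored by the row-sum of the absolute CBT difference; the paper may aggregate slightly differently):

```python
import numpy as np

def top_k_rois(cbt_a, cbt_b, k):
    # score each ROI by how much its connectivity pattern differs across classes
    score = np.abs(cbt_a - cbt_b).sum(axis=1)    # (n_r,) one score per ROI
    return np.argsort(score)[::-1][:k]           # indices of the k highest scores
```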

2.4.4. Discovery of Most Discriminative Brain ROIs for Each Disordered Population

        ①The most discriminative ROIs for ASD: the left insular cortex, the left superior temporal sulcus (STS), and the right frontal pole

        ②The most discriminative ROIs for AD: the left temporal pole (TP) and the right entorhinal cortex (EC)

2.5. Conclusion

        Future work: evaluating DGN on multi-modal datasets and adding topological loss constraints within the geometric deep learning framework

3. Supplementary Knowledge

3.1. geometric deep learning

(1)Reference tutorial: 几何深度学习(Geometric Deep Learning)技术 - 知乎 (zhihu.com)

(2)Reference paper: Geometric Deep Learning: Going beyond Euclidean data | IEEE Journals & Magazine | IEEE Xplore

4. Reference

Gurbuz, M. B. & Rekik, I. (2020) 'Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates', MICCAI.
