[Paper Close Reading] A residual graph convolutional network with spatio-temporal features for autism classification

Full title: A residual graph convolutional network with spatio-temporal features for autism classification from fMRI brain images

Original paper: A residual graph convolutional network with spatio-temporal features for autism classification from fMRI brain images - ScienceDirect

The English here is typed entirely by hand as my own summarizing and paraphrasing of the original paper. Unavoidable spelling and grammatical errors may appear; if you spot any, corrections in the comments are welcome! This post reads more like personal notes, so take it with a grain of salt!

Contents

1. TL;DR version

1.1. Personal takeaways

1.2. Paper summary figure

2. Section-by-section close reading

2.1. Abstract

2.2. Introduction

2.3. Related works

2.4. Learning spatio-temporal features

2.4.1. Residual convolutional neural network for spatial features

2.4.2. Attention mechanism for temporal features

2.5. Learning dynamic functional connectivity

2.5.1. Graph transformation

2.5.2. Graph neural network to learn dynamic connectivity

2.6. Experiments

2.6.1. Data preprocessing and implementation

2.6.2. Comparison of classification accuracy

2.6.3. Analysis of spatio-temporal features

2.6.4. Comparison of the pre-trained and proposed models

2.6.5. Analysis of difference between two groups

2.7. Conclusions

3. Supplementary knowledge

3.1. Fuzzy parameter tuning

3.2. Meta-heuristic optimization

4. Reference List


1. TL;DR version

1.1. Personal takeaways

(1) Whoa, the 96% classification accuracy announced right at the start blew me away; the field really moves fast, haha.

(2) ⭐ The opening says they use the functional connectivity between the STS and the visual cortex; does that mean all other connections are ignored?

(3) ⭐ Another two-stage design: residual attention first, then graph convolution, which is fair enough. STA-4DCNN similarly does U-Net first and attention afterwards, and its figures are quite pretty, very much to my taste.

(4) This completely changes the traditional graph convolution!? The graph nodes are built from nodes obtained inside the attention module?! How interpretable can that be?

(5) ⭐ Does ABIDE even distinguish ASD subtypes? I doubt it.

(6) The authors claim that jointly considering temporal and spatial features is one of their contributions, which I do not buy; I have read plenty of papers doing exactly this, some from earlier years. Probably padding. That said, the innovations noted in (3) and (4) are already sufficient.

(7) Dude, why not give the model a name??

(8) Questionable: the authors say convolutional and recurrent networks can learn both the structural and the functional features of the brain, but never say what exactly is convolved. Surely not the .nii file itself? Actually, it seems they do convert the .nii directly into images and convolve those.

(9) And what is going on with this resolution??? 24*24*24???

        ① I ran a little experiment: first I drew a 24*24 brain in Pixart for fun.

It turns out my drawing is a bit ugly.

        ② Then I wrote a snippet to resize my screenshot to 24*24:

from PIL import Image

# open the image
img = Image.open('brain.jpg')

# resize the image to 24x24
resized_img = img.resize((24, 24))

# convert the image mode to RGB
resized_img = resized_img.convert('RGB')

# save the result
resized_img.save('resized_brain.jpg')

This is the original image:

This is the resized one:

Scaled back up proportionally, it looks like this:

(10) Section 2.6.3 lacks a pathological analysis.

(11) There is hardly any discussion of limitations or future work.

1.2. Paper summary figure

2. Section-by-section close reading

2.1. Abstract

        ① ⭐ There is currently no specific known etiology of autism spectrum disorder (ASD). However, abnormalities in the superior temporal sulcus (STS) connected with the visual cortex may be a relevant factor in ASD, so the functional connectivities between the STS and the visual cortex are used for diagnosis.

        ② Their model first extracts temporal and spatial features from 4D fMRI images with a residual attention network, and then constructs a graph convolutional network ⭐whose 39 nodes are extracted from the residual attention network.

        ③ They choose 800 subjects from ABIDE, adopt 10-fold cross validation, and finally achieve an 11.37% improvement in classification accuracy.

sulcus  n. (anatomy) any of the narrow grooves in an organ or tissue especially those that mark the convolutions on the surface of the brain

2.2. Introduction

        ① ⭐ Autism includes disorders such as Asperger's syndrome, Rett syndrome, etc. It may present differently in behavior or in the brain network. (Diseases really do have many subtypes: diabetes has type 1 and type 2, hyperthyroidism has Hashimoto's thyroiditis and Graves' disease, and pneumonia comes in many varieties. Put that way it is genuinely complicated; the human body is fascinating.)

        ② Atypical head movements or micro eye movements characterize ASD to some extent. Furthermore, fMRI may provide more vital information about ASD. In most cases the connections between the default mode network (DMN) and the visual regions are weaker in ASD than in HC/NC.

        ③ They overcome a limitation of other multivariate time-series methods when extracting the functional connectivity between the STS and the visual cortex.

        ④ They visualize the attention weights in the hidden layers to show the relevance between ASD and specific brain regions.

2.3. Related works

        ① Relevant deep learning models for ASD classification are:

(You can see these are either plain convolutions, ensembles, or attention-style models, and the authors seem to place themselves closest to the last group. But are these really SOTA? Where is the STAGIN paper I read, a top-venue paper as well? Recording the year and the number of subjects is commendable, though; that is a good habit.)

        ② The ensemble model combines DenseNet, ResNet, Xception and Inception V3.

        ③ ⭐ Spatial features are always extracted by convolution.

        ④ Convolutional-recurrent neural networks are able to learn the structure and function of the brain at the same time. (Why? How does convolution alone capture both structure and function? And they never say what is being convolved.)

        ⑤ They adopt 2 biomarkers. (Again, they never say which ones.)

2.4. Learning spatio-temporal features

        ①The broad framework of "their model":

Obviously, the pipeline is: pre-processing -> extract spatial features with an ordinary convolution -> extract temporal features with an attention mechanism -> Softmax????? (Where is the graph convolution?!)

        ② They not only aim to reduce the loss of connections between the STS and the visual cortex (why is there a connection loss at all? Is it the same thing as the training loss? Probably not?), but also select useful features from the functional connectivity.

        ③ In the pre-processing stage, they split the original 4D file into 3D fMRI images. Each block covers t seconds and k columns. (I still cannot quite follow their sliding window later on... it does not seem able to produce overlapping segments; see the sketch after this list.)

        ④ After convolving the 3D blocks, they obtain vectors x_1, x_2, ..., x_t, which are fed into a Bi-LSTM.

        ⑤ The output of the Bi-LSTM is the vectors h_1, h_2, ..., h_t.
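Since the paper only states that each block spans t seconds, here is a minimal sketch (my own assumption, not the authors' code) of how such 3D blocks could be cut from the 4D series; the window and stride values and the function name are hypothetical.

import numpy as np

def split_into_blocks(fmri_4d, window=10, stride=10):
    # split a 4D fMRI array (time, x, y, z) into 3D blocks of `window` frames;
    # with stride == window the blocks do not overlap, which matches my reading
    # of the paper, while a smaller stride would give overlapping windows instead
    n_frames = fmri_4d.shape[0]
    blocks = [fmri_4d[start:start + window]
              for start in range(0, n_frames - window + 1, stride)]
    return np.stack(blocks)  # shape: (n_blocks, window, x, y, z)

# toy example: 120 frames of 24x24x24 volumes -> 12 non-overlapping blocks
volumes = np.random.rand(120, 24, 24, 24).astype(np.float32)
print(split_into_blocks(volumes).shape)  # (12, 10, 24, 24, 24)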

2.4.1. Residual convolutional neural network for spatial features

        ① They define the convolutional filter and the residual mapping as:

\begin{aligned}&H(x)=F(x)+x\\&F(x)=\sum_{a=0}^{m-1}\sum_{b=0}^{m-1}w_{ab}^{l}x_{(i+a)(j+b)}^{l-1}\end{aligned}

so the gradient becomes F^{\prime}(x)+1, which avoids the vanishing-gradient problem and reduces the loss of functional connectivity???? (Hmm, I have not studied this deeply yet; can a residual network really reduce that? This residual block is the plainest, most straightforward kind; a minimal sketch of it follows at the end of this subsection.)

        ② A visualization of the filters:
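A minimal sketch of the residual block H(x) = F(x) + x described above, written with Keras since the paper reports TensorFlow 2.3; the filter count, kernel size and activation are my own assumptions rather than the authors' configuration.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=32, kernel_size=3):
    # H(x) = F(x) + x: two 3D convolutions plus the identity shortcut
    shortcut = x
    y = layers.Conv3D(filters, kernel_size, padding='same', activation='relu')(x)
    y = layers.Conv3D(filters, kernel_size, padding='same')(y)
    # the skip connection keeps the gradient at F'(x) + 1, so it cannot vanish entirely
    return layers.ReLU()(layers.Add()([y, shortcut]))

# toy usage on a 24x24x24 volume with 32 channels
inputs = tf.keras.Input(shape=(24, 24, 24, 32))
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)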

2.4.2. Attention mechanism for temporal features

        ① The attention score is calculated by (what is f(·) here? The original attention formula?):

score(h_i,h_j)=f(W_qh_i,W_kh_j)

        ② The attention weights a_{ij} over the LSTM outputs (a small attention sketch is given at the end of this subsection):

a_{ij}=\mathrm{softmax}_j\left(h_{ij}\right)=\frac{\exp[score(h_i,h_j)\cdot W_vh_i]}{\sum_{n=1}^N\exp[score(h_i,h_n)\cdot W_vh_i]}

        ③ The output is R=\{r_i,\ i=1,\ldots,N\}, where each element is computed as:

r_i=\sum_{n=1}^Na_{in}\cdotp h_n

        ④A visualization of Bi-LSTM:
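A minimal NumPy sketch of the attention step above. The paper never specifies f(·), so a scaled dot product between projected queries and keys is assumed here, and the extra W_v h_i factor inside the exponential is omitted for simplicity; names and dimensions are hypothetical.

import numpy as np

def attention_over_hidden_states(H, Wq, Wk, d):
    # H holds the Bi-LSTM hidden states as rows (N x d)
    Q, K = H @ Wq, H @ Wk                            # project the hidden states
    scores = Q @ K.T / np.sqrt(d)                    # score(h_i, h_j), assumed form
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # a_ij = softmax over j
    return weights @ H                               # r_i = sum_n a_in * h_n

# toy usage: N = 10 hidden states of dimension d = 16
N, d = 10, 16
H = np.random.randn(N, d)
R = attention_over_hidden_states(H, np.random.randn(d, d), np.random.randn(d, d), d)
print(R.shape)  # (10, 16)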

2.5. Learning dynamic functional connectivity

        ① The previous model serves as the pre-training model:

2.5.1. Graph transformation

        ①The connectivity matrix is generated after the first residual block:

they only retain the positive connections. (But they never explain why there are exactly 39 nodes??? A thresholding sketch is given at the end of this subsection.)

dissect  vt. to cut open (a person, animal or plant); to analyze; to examine closely; to comment on in detail; to cut into small pieces
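A minimal sketch of turning a connectivity matrix into an adjacency matrix that keeps only positive connections. The paper derives the connectivity from the first residual block, so the plain Pearson correlation used here is only an assumed stand-in.

import numpy as np

def positive_adjacency(features):
    # features: (n_nodes, n_dims); correlation here stands in for the paper's
    # connectivity computed from the first residual block
    conn = np.corrcoef(features)           # (n_nodes, n_nodes) connectivity
    adj = np.where(conn > 0, conn, 0.0)    # drop negative connections
    np.fill_diagonal(adj, 0.0)             # no self-loops
    return adj

# toy usage with 39 nodes, matching the node count quoted in the paper
adj = positive_adjacency(np.random.randn(39, 64))
print(adj.shape)  # (39, 39)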

2.5.2. Graph neural network to learn dynamic connectivity

        ① They deploy a graph convolutional network (GCN) on the graph structure:

H=\varphi(A,X)=\sigma\left(AXW\right)

where X denotes the node feature matrix (I still want to ask where the node features of fMRI data come from) and A denotes the adjacency matrix;

\sigma\left(\cdot\right) is the activation applied after the two matrices are combined with the learnable weight matrix W.

        ② The construction of the graph is shown below:

        ③ They aggregate the connectivity information as:

h_v^k=\sigma\left(W^{k-1}\cdot\sum_{u\in N(v)}a_{uv}\frac{h_u^{k-1}}{|N(v)|}+B_kh_v^{k-1}\right)

which has the same form as a graph attention network (GAT); a minimal GCN sketch is given at the end of this subsection.

        ④Their readout function is summation:

\mathrm{Z}_\mathrm{G}=\sigma\left(\sum_{i\in V}MLP\left(h_i^k\right)\right)

        ⑤ The pseudo-code for constructing the connectivity:
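A minimal NumPy sketch of the GCN layer H = σ(AXW) and the summation readout Z_G = σ(Σ_i MLP(h_i)) given above. The activation (tanh) and the one-layer MLP are assumptions; the paper does not state them.

import numpy as np

def gcn_layer(A, X, W, activation=np.tanh):
    # one GCN layer: H = sigma(A X W)
    return activation(A @ X @ W)

def sum_readout(H, W_mlp):
    # graph-level readout: Z_G = sigma(sum_i MLP(h_i)), with a one-layer MLP
    return np.tanh((H @ W_mlp).sum(axis=0))

# toy usage: 39 nodes with 64-dim features, mirroring the paper's node count
rng = np.random.default_rng(0)
A = rng.random((39, 39))
A = (A + A.T) / 2                                    # hypothetical symmetric adjacency
X = rng.standard_normal((39, 64))
H = gcn_layer(A, X, rng.standard_normal((64, 32)))   # node embeddings, shape (39, 32)
z = sum_readout(H, rng.standard_normal((32, 16)))    # graph embedding, shape (16,)
print(H.shape, z.shape)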

2.6. Experiments

        ①Ubuntu, Python 3.x, and TensorFlow 2.3!!!

        ② Python libraries: Scikit-learn, Nilearn, NetworkX, etc.

2.6.1. Data preprocessing and implementation

        ①Dataset: ABIDE I

        ② Sample: 800 subjects, of which 389 ASD and 411 NC

        ③Training set: 720, 90% of original sample

        ④Validation set: 80, 10% of original sample

        ⑤Optimizer: Adam

        ⑥ ⭐ They reshape every 3D image to 24 × 24 × 24 resolution, so the 4D input becomes 10 × 24 × 24 × 24 (a resizing sketch follows this list).

        ⑦ The table of the whole architecture (useful indeed, but why not just release the code?):

        ⑧ Hyper-parameters (would it kill you to publish the code? I still have to reproduce this by hand):
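A minimal sketch of resampling a 3D volume to 24 × 24 × 24, as mentioned in ⑥. The paper does not say which tool or interpolation was used, so scipy.ndimage.zoom with linear interpolation is only an assumption.

import numpy as np
from scipy.ndimage import zoom

def resize_volume(volume, target=(24, 24, 24)):
    # resample a 3D volume to the target resolution with spline interpolation
    factors = [t / s for t, s in zip(target, volume.shape)]
    return zoom(volume, factors, order=1)  # order=1: linear interpolation

# toy usage: a 61 x 73 x 61 MNI-like volume down to 24 x 24 x 24
vol = np.random.rand(61, 73, 61).astype(np.float32)
print(resize_volume(vol).shape)  # (24, 24, 24)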

2.6.2. Comparison of classification accuracy

        ① Comparison of 10-fold cross validation across different models:

(0.9627! Cheer! Applaud! (OK, toning it down.))

        ② Hyper-parameter tuning of the number of layers:

so they finally adopt five GCN blocks with 3 layers
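A minimal sketch of a stratified 10-fold split over the 800 ABIDE subjects (389 ASD / 411 NC), which reproduces the 720 / 80 train-validation sizes quoted in section 2.6.1; the actual training and evaluation of the model are only placeholders here.

import numpy as np
from sklearn.model_selection import StratifiedKFold

# hypothetical label vector for the 800 ABIDE subjects: 389 ASD (1), 411 NC (0)
y = np.array([1] * 389 + [0] * 411)
subjects = np.arange(len(y)).reshape(-1, 1)  # stand-in for subject-level features

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(subjects, y)):
    # each fold keeps 720 subjects for training and 80 for validation,
    # matching the 90% / 10% split quoted in section 2.6.1
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")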

2.6.3. Analysis of spatio-temporal features

        ① They compare the activated regions of their model with a traditional vanilla CNN, which is taken as evidence for their model's effectiveness. (The authors only say that the vanilla activations are spread evenly while theirs concentrate around the center of the brain, but they never show that the brain center is where ASD pathology actually lies; that would need a pathological justification.):

        ② An example of the weight assignments in the 8th embedding vector (what is this supposed to be?):

2.6.4. Comparison of the pre-trained and proposed models

        ①Illustration of confusion matrix, precision, recall, F1-score and misclassification graph

        ② They hope to further improve the classification performance by studying the loss function in future work.

2.6.5. Analysis of difference between two groups

        ① Subject distribution in the validation set: 42 ASD and 38 NC

        ②"when analyzing other regions with a connection difference of more than ±6%, there are notable disparities between the two groups in the connectivity over the different regions connected to the lateral occidental complex (LOC) or default mode network (DMN)"(我没太懂这个

striate  adj. striped; marked with stripes or striations

2.7. Conclusions

        They put forward an excellent model. Moreover, they think fuzzy parameter tuning and meta-heuristic optimization could be used for their parameter optimization. This model could also be applied to other diseases.

3. Supplementary knowledge

3.1. Fuzzy parameter tuning

"Fuzzy parameter tuning" 指的是调整模糊逻辑(fuzzy logic)参数的过程。模糊逻辑是一种用于处理不确定性和模糊性的逻辑,它比传统的二值逻辑(0或1)更加灵活,可以更好地处理模糊的、连续的或不确定的数据。

在模糊逻辑中,参数通常包括隶属度函数(membership function)、规则(rule)和去模糊化方法(defuzzification method)等。这些参数需要根据实际应用场景进行调整,以获得最佳的模糊逻辑性能。

在进行模糊参数调整时,通常需要采用一些优化算法,如遗传算法(genetic algorithm)、粒子群优化算法(particle swarm optimization algorithm)等,以寻找最优的参数组合。这些算法可以自动地搜索和优化参数,以使模糊逻辑系统达到最佳的性能。

总之,"fuzzy parameter tuning" 是指通过调整模糊逻辑的参数,以优化系统的性能的过程。这个过程通常需要借助一些优化算法来实现。

3.2. Meta-heuristic optimization

Meta-heuristic optimization is an optimization technique that uses heuristics to search for solutions. These heuristics are often inspired by natural or biological systems, such as genetics, evolution, and swarm behavior.

Meta-heuristic methods include many different algorithms, such as genetic algorithms, particle swarm optimization, ant colony optimization, and simulated annealing. They all use heuristic search strategies to find near-optimal solutions without requiring an exact objective function or gradient information.

Meta-heuristic methods are typically suited to complex, non-linear, multi-modal optimization problems; they can handle large-scale problems and find high-quality solutions in a relatively short time.

In short, meta-heuristic optimization is a heuristics-based optimization technique that can be applied to a wide range of complex optimization problems.

4. Reference List

Park, K. & Cho, S. (2023) 'A residual graph convolutional network with spatio-temporal features for autism classification from fMRI brain images', Applied Soft Computing, 142.
