Paper link: BrainTGL: A dynamic graph representation learning model for brain network analysis - ScienceDirect
The English here is typed entirely by hand, summarizing and paraphrasing the original paper. Some spelling and grammar mistakes are hard to avoid; if you spot any, corrections in the comments are welcome! This post is closer to personal notes, so read with caution.
目录
2.3.1. The static brain network analysis methods
2.3.2. The dynamic brain network analysis methods
2.4.2. Dynamic brain network series construction
2.4.4. Attention based graph pooling
2.4.5. Dual temporal graph learning
2.4.7. Variation of BrainTGL for unsupervised clustering
2.4.8. Theoretical contribution
2.5.1. Datasets and environment
2.5.2. Comparison with the prior works on the brain network classification
2.5.5. Disease subtype clustering analysis
2.6. Limitations and future works
1. Takeaways
(1) Not a very difficult paper
(2) So many experiments though... whew
2. 论文逐段精读
2.1. Abstract
①Existing works do not consider the relationships between spatial and temporal characteristics in the brain
②They proposed a temporal graph representation learning framework for brain networks (BrainTGL)
2.2. Introduction
①They proposed a dual temporal graph learning (DTGL) module
2.3. Related work
①Methods of brain network analysis:
(The authors do not say much inside the figure itself and instead attach a long explanation below it... I find this a bit awkward, since it leaves the figure hard to parse on its own. The authors regard (a) as the most traditional static FC analysis; (b) is also relatively static, my guess being that it just slides a window over the BOLD signal, which the authors argue ignores the "dynamic spatial dependencies"; the two-stage learning in (c) separates space from time; and (d) is the authors' own approach.)
2.3.1. The static brain network analysis methods
①Lists some static methods and explains that they cannot learn high-level features
2.3.2. The dynamic brain network analysis methods
②Lists some dynamic models and explains that they cannot model the two types of networks at the same time
2.4. Method
2.4.1. Problem statement
①Time series data: $X \in \mathbb{R}^{N \times T}$, where $N$ denotes the number of ROIs and $T$ the number of time points
②Label:
③Dataset:
④Graph:
⑤Atlas: CC200
⑥Adjacency matrix: calculated by the Pearson correlation coefficient (PCC); see the sketch after this list
⑦Mapping function:
⑧Overall framework:
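For ⑥, a minimal sketch of the PCC adjacency construction (function and variable names are mine, not the paper's):

```python
import numpy as np

def pcc_adjacency(bold):
    """Functional-connectivity adjacency from BOLD signals.

    bold: (N, T) array, N ROIs x T time points (CC200 gives N = 200).
    Returns the (N, N) Pearson correlation matrix.
    """
    adj = np.corrcoef(bold)       # pairwise PCC between ROI time series
    np.fill_diagonal(adj, 0.0)    # drop self-correlations (a common choice, assumed here)
    return adj
```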
2.4.2. Dynamic brain network series construction
①Brain graph construction:
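Assuming the standard sliding-window recipe (the window length and stride below are illustrative, not the paper's settings), the series construction could look like:

```python
import numpy as np

def dynamic_network_series(bold, win=30, stride=10):
    """Slide a window over the (N, T) BOLD array and build one PCC
    graph per window, yielding a temporal sequence of graphs."""
    T = bold.shape[1]
    graphs = [np.corrcoef(bold[:, s:s + win])       # one FC graph per window
              for s in range(0, T - win + 1, stride)]
    return np.stack(graphs)                         # (num_windows, N, N)
```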
2.4.3. Data augmentation
①Crop each BOLD signal to the same length, and then divide it into segments:
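One plausible reading of this step (crop length 176 matches the minimum ABIDE length kept in 2.5.1; the segment count is an assumption):

```python
import numpy as np

def crop_and_split(bold, L=176, n_seg=4, rng=np.random.default_rng(0)):
    """Randomly crop an (N, T) BOLD array to length L, then split it
    into n_seg equal segments, each usable as an augmented sample."""
    start = rng.integers(0, bold.shape[1] - L + 1)
    cropped = bold[:, start:start + L]
    return np.split(cropped, n_seg, axis=1)   # list of (N, L // n_seg) arrays
```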
2.4.4. Attention based graph pooling
①The original graph is then coarsened into a smaller graph of supernodes with its own adjacency matrix
②They define a learnable parameter whose entries give the importance score of each node (a pooling sketch follows this list):
③Schematic:
④Values of the super edges in the coarsened adjacency matrix:
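A sketch of how the learnable scores, supernode features, and super edges could fit together; the soft-assignment form and the weighted-sum readout are my assumptions (the paper pools supernode features by max, see 2.4.5):

```python
import torch
import torch.nn as nn

class AttentionGraphPool(nn.Module):
    """Coarsen an N-node graph into K supernodes via learned scores
    (a sketch, not the paper's exact formulation)."""

    def __init__(self, n_nodes, n_super):
        super().__init__()
        # learnable importance logits: one score per (node, supernode) pair
        self.assign = nn.Parameter(torch.randn(n_nodes, n_super))

    def forward(self, x, adj):
        # x: (N, F) node features, adj: (N, N) adjacency
        s = torch.softmax(self.assign, dim=1)   # soft node-to-supernode assignment
        x_c = s.t() @ x                         # (K, F) supernode features
        adj_c = s.t() @ adj @ s                 # (K, K) super-edge weights
        return x_c, adj_c
```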
2.4.5. Dual temporal graph learning
①The schematic:
②Signal representation learning (S-RL) module:
where the symbols denote, respectively, the convolution in each layer, the kernel size, and the elements of the BOLD signal
③Supernode features: max pooling over the original nodes each supernode contains
④Proposed a temporal graph representation learning (TG-RL) module
⑤They designed a multi-skip scheme to capture both long-skip and short-skip information (a sketch follows this list):
where the input, hidden state, and cell memory are all graph-structured, and the outputs are the modulated input, the hidden state, and the cell memory, respectively
⑥Output of the graph convolution:
⑦The final embedding:
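Putting ② and ⑤ together, a hedged sketch of the two branches; the channel sizes, kernel size, and the way the long-skip state is mixed in are all illustrative choices, not the paper's:

```python
import torch
import torch.nn as nn

class SRL(nn.Module):
    """S-RL sketch: stacked 1-D convolutions over raw BOLD signals."""

    def __init__(self, k=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=k, padding=k // 2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=k, padding=k // 2), nn.ReLU(),
        )

    def forward(self, bold):       # bold: (N, 1, T), one channel per ROI
        return self.net(bold)      # (N, 32, T) temporal features


class MultiSkipLSTM(nn.Module):
    """TG-RL sketch: an LSTM over per-window graph embeddings whose
    state at step t also sees the state from t - skip (long skip) in
    addition to t - 1 (short skip)."""

    def __init__(self, dim, skip=3):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)
        self.skip = skip

    def forward(self, seq):        # seq: (T, B, dim) window embeddings
        T, B, D = seq.shape
        h, c = seq.new_zeros(B, D), seq.new_zeros(B, D)
        hist = []
        for t in range(T):
            if t >= self.skip:     # mix in the long-skip state (assumed rule)
                h = 0.5 * (h + hist[t - self.skip])
            h, c = self.cell(seq[t], (h, c))
            hist.append(h)
        return hist[-1]            # final embedding
```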
2.4.6. Ensemble
①Hyperparameter optimization: a multi-time-window ensemble strategy
②The ensembling process:
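For instance, averaging softmax outputs across the per-window-length models (averaging is one standard choice here; the paper may combine them differently):

```python
import torch

def ensemble_predict(models, inputs):
    """models[i] was trained with the i-th window length; inputs[i] is the
    matching dynamic-network series for one subject batch."""
    probs = [torch.softmax(m(x), dim=-1) for m, x in zip(models, inputs)]
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)   # averaged vote
```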
2.4.7. Variation of BrainTGL for unsupervised clustering
①Framework of unsupervised BrainTGL (BrainTGL-C):
the pseudo labels are re-initialized in every iteration
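My reading of one outer iteration, with k-means supplying the pseudo labels (the helper names `encoder` and `train_step` are hypothetical):

```python
import torch
from sklearn.cluster import KMeans

def braintgl_c_iteration(encoder, data, n_clusters, train_step):
    """Re-initialize pseudo labels by clustering the current embeddings,
    then train against them (one iteration of the alternating scheme)."""
    with torch.no_grad():
        emb = torch.stack([encoder(x) for x in data]).cpu().numpy()
    pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    for x, y in zip(data, pseudo):
        train_step(x, int(y))   # standard supervised update on the pseudo label
```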
2.4.8. Theoretical contribution
(1)Feature learning for spatio-temporal data
(2)Graph structure learning for the graph data with complex structure
2.5. Experiments
2.5.1. Datasets and environment
(1)Datasets
①ABIDE: 871 subjects (403 ASD, 468 HC) after preprocessing filters. Subjects whose BOLD length falls outside [176, 250] are further eliminated, leaving 512 subjects
②HCP: excluding subjects with fewer than 1200 frames leaves 1091 subjects (498 female, 593 male; 22 cortical surface regions)
③NMU: NMU MDD with 246 HC and 181 MDD; NMU BD with 246 HC and 146 BD; the AAL-90 atlas is applied
④Hardware environment:
2.5.2. Comparison with the prior works on the brain network classification
①Cross validation: 5-fold (see the sketch after this list)
②Comparison table on HCP and ABIDE:
③Comparison table on NMU:
④ROC:
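For ①, the usual 5-fold protocol looks like this (the features and labels below are stand-ins; stratifying by label is my assumption):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.randn(100, 40)        # stand-in features (e.g., flattened FC)
y = np.random.randint(0, 2, 100)    # stand-in diagnosis labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, te) in enumerate(skf.split(X, y)):
    # train on X[tr], y[tr] and evaluate on X[te], y[te] here
    print(f"fold {fold}: {len(tr)} train / {len(te)} test subjects")
```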
2.5.3. Ablation study
①Module ablation on ABIDE:
②Module ablation on HCP:
2.5.4. Discussion
(1)The impact of attention graph pooling
①Result of t-SNE:
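The usual way to produce such a plot (`emb` and `labels` stand in for the learned embeddings and diagnoses):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb = np.random.randn(200, 64)           # stand-in embeddings
labels = np.random.randint(0, 2, 200)    # stand-in class labels
emb2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(emb)
plt.scatter(emb2d[:, 0], emb2d[:, 1], c=labels, s=8)
plt.title("t-SNE of learned embeddings")
plt.show()
```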
(2)The impact of the supernode number
①Number of supernodes:
(3)Comparison of pooling methods
①Comparison with other pooling methods on ABIDE:
(4)The impact of different skip lengths in TG-RL
①Skip ablation:
(5)The model complexity
①FLOPs and training time comparison:
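FLOPs usually come from a profiler (e.g., the fvcore or thop packages); a quick size-and-latency check that needs only PyTorch, with `model` and the input shape assumed:

```python
import time
import torch

n_params = sum(p.numel() for p in model.parameters())   # `model` is assumed
x = torch.randn(1, 200, 176)                            # assumed (batch, ROI, time) input
t0 = time.perf_counter()
with torch.no_grad():
    model(x)
print(f"{n_params} parameters, {time.perf_counter() - t0:.3f}s per forward pass")
```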
2.5.5. Disease subtype clustering analysis
①Subtype:
②Subtype classification on MDD:
③Subtype on BD:
2.6. Limitations and future works
①The authors note that information is lost during coarsening. But this goes without saying: coarsening always loses information. One could keep the original information at a lower weight, or fuse in some kind of parallel refinement branch, etc.
②The authors feel that long-range dependencies were not captured. Is that really the researchers' fault??? The datasets only contain a few minutes of BOLD signal; nobody can capture long-range dependencies from that. Longitudinal data would be the way to go.
③They suggest that more HPO methods could be explored. Sure.
2.7. Conclusion
~
3. Reference
Liu, L. et al. (2023) 'BrainTGL: A dynamic graph representation learning model for brain network analysis', Computers in Biology and Medicine, 153.