[Paper Notes] A systematic comparison study on hyperparameter optimisation of graph neural networks for molecular property prediction

Full paper title: A systematic comparison study on hyperparameter optimisation of graph neural networks for molecular property prediction

Paper page: [PDF] A systematic comparison study on hyperparameter optimisation of graph neural networks for molecular property prediction | Semantic Scholar

The English is all hand-typed! It is my own summarizing and paraphrasing of the original paper, so spelling and grammar mistakes are hard to avoid; if you spot any, feel free to point them out in the comments. This post reads more like notes, so take it with a grain of salt.

Table of contents

1. TL;DR

1.1. Takeaways

2. Section-by-section reading

2.1. Abstract

2.2. Introduction

2.3. Related work

2.3.1. Random search

2.3.2. TPE

2.3.3. CMA-ES

2.4. Methodology comparison

2.4.1. Common features

2.4.2. Specific features

2.5. Experimental investigation

2.5.1. Experimental settings

2.5.2. Computational cost as a primary consideration

2.5.3. Performance as primary consideration

2.6. Conclusion and future work

3. Reference


1. TL;DR

1.1. Takeaways

(1) How do these guys have yet another paper out

2. Section-by-section reading

2.1. Abstract

        ①Molecular datasets are far smaller than other datasets such as image datasets

        ②They compared TPE and CMA-ES with Random Search (RS)

2.2. Introduction

        ①Briefly introduces the application areas of GNNs and the development of the models, then points out that hyperparameter choices affect the results and that choosing them blindly is very time-consuming

        ②TPE and CMA-ES are two SOTA HPO algorithms (the authors think so)

impediment  n. hindrance, obstacle; speech impediment, stutter

2.3. Related work

2.3.1. Random search

        ①Algorithm of RS; Bergstra et al. considered RS the natural baseline for HPO

        ②RS struggles when the search space is broad and the computational budget is limited (see the random-search sketch below)
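
A minimal random-search sketch in Python. The search ranges, trial count, and the toy objective are hypothetical placeholders, not the paper's actual setup; in the paper the objective would be the validation RMSE of a GCN trained with the sampled configuration.

```python
import random

# Hypothetical search space and ranges -- not the paper's actual settings.
BATCH_SIZES = [16, 32, 64, 128]
FC_SIZES = [64, 128, 256]
GCN_SIZES = [32, 64, 128]

def sample_config():
    """Draw one configuration uniformly at random from the space."""
    return {
        "batch_size": random.choice(BATCH_SIZES),
        "learning_rate": 10 ** random.uniform(-4, -2),  # log-uniform in [1e-4, 1e-2]
        "fc_size": random.choice(FC_SIZES),
        "gcn_size": random.choice(GCN_SIZES),
    }

def random_search(objective, n_trials=50):
    """Evaluate independent random configurations and keep the best one."""
    best_cfg, best_rmse = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config()
        rmse = objective(cfg)  # e.g. validation RMSE of a GCN trained with cfg
        if rmse < best_rmse:
            best_cfg, best_rmse = cfg, rmse
    return best_cfg, best_rmse

if __name__ == "__main__":
    # Synthetic stand-in objective so the sketch runs on its own.
    toy = lambda cfg: abs(cfg["learning_rate"] - 1e-3) + cfg["batch_size"] / 1e4
    print(random_search(toy, n_trials=20))
```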

2.3.2. TPE

        ①Sequential model-based global optimisation (SMBO) algorithms could solve the cost problem

        ②The Tree-structured Parzen Estimator (TPE) is built on SMBO

        ③The paper introduces TPE and Expected Improvement (EI) in detail, which is not expanded on here; interested readers can refer to this paper (though I would recommend reading the original directly)

        ④Algorithm of TPE (a minimal hyperopt-based sketch follows)
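
A minimal sketch of TPE via the hyperopt library. The search space and the synthetic objective are placeholders I made up for illustration; in the paper the objective would be the validation RMSE of a trained GCN.

```python
import math
from hyperopt import fmin, tpe, hp, Trials

# Hypothetical search space -- not the paper's actual ranges.
space = {
    "batch_size": hp.choice("batch_size", [16, 32, 64, 128]),
    "learning_rate": hp.loguniform("learning_rate", math.log(1e-4), math.log(1e-2)),
    "fc_size": hp.choice("fc_size", [64, 128, 256]),
}

def objective(cfg):
    """Stand-in for 'train a GCN with cfg and return the validation RMSE'.
    A synthetic surface is used here so the sketch is self-contained."""
    return (math.log10(cfg["learning_rate"]) + 3) ** 2 + cfg["batch_size"] / 1000

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print(best)  # note: hp.choice entries are reported as indices, not values
```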

2.3.3. CMA-ES

        ①Algorithm of CMA-ES, a derivative-free evolutionary algorithm that can solve black-box optimisation problems (a minimal sketch with the cma package follows)
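
A minimal CMA-ES sketch using the cma package's ask/tell interface. The two continuous dimensions and the sphere objective are placeholders; real hyperparameters such as batch size would have to be decoded from the continuous vector, e.g. by rounding.

```python
import cma

def objective(x):
    """Stand-in for 'decode x into hyperparameters, train a GCN, return RMSE'.
    A simple sphere function keeps the sketch self-contained."""
    return sum(xi ** 2 for xi in x)

# Initial mean, initial step size, and a small iteration budget (all placeholders).
es = cma.CMAEvolutionStrategy([1.0, 1.0], 0.5, {"maxiter": 30, "verbose": -9})
while not es.stop():
    candidates = es.ask()  # sample a population from the multivariate normal
    es.tell(candidates, [objective(c) for c in candidates])  # adapt mean and covariance
print(es.result.xbest)
```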

2.4. Methodology comparison

2.4.1. Common features

        ①Randomness

        ②Derivative-free

        ③Termination condition

2.4.2. Specific features

        ①Sampling distribution: Uniform Distribution (RS) vs Gaussian Mixture Model (TPE) vs Multivariate Normal Distribution (CMA-ES); a toy illustration follows this list

        ②Model-based vs model-free

        ③Bayesian Optimisation vs Evolutionary Strategy
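
A toy NumPy illustration of how the three methods draw candidates (this is not the actual algorithms, and all numbers are made up): RS draws from a fixed uniform distribution, TPE draws from a Parzen estimator (a Gaussian mixture centred on previously observed "good" trials), and CMA-ES draws a population from one multivariate normal whose mean and covariance are adapted between generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# RS: a fixed uniform distribution over the range, e.g. log10(learning rate).
rs_sample = rng.uniform(low=-4.0, high=-2.0)

# TPE-like: a mixture of Gaussians centred on previously observed "good" trials.
good_trials = np.array([-3.1, -2.9, -3.0])
component = rng.integers(len(good_trials))
tpe_sample = rng.normal(loc=good_trials[component], scale=0.1)

# CMA-ES-like: a single multivariate normal with an adapted mean and covariance.
mean = np.array([-3.0, 2.0])               # e.g. (log10 learning rate, log2 batch size)
cov = np.array([[0.05, 0.0], [0.0, 0.1]])
cma_sample = rng.multivariate_normal(mean, cov)

print(rs_sample, tpe_sample, cma_sample)
```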

2.5. Experimental investigation

2.5.1. Experimental settings

        ①Datasets: ESOL (1128 samples), FreeSolv (642 samples) and Lipophilicity (4200 samples)

        ②GNN chosen: GCN

        ③Hyperparameters selected: batch size s_b, learning rate, size of the fully connected layer s_f, and size of the GCN layer

        ④Meta-hyperparameters of the HPO algorithms: defaults

        ⑤Training/validation/testing set: 80%/10%/10%

        ⑥Evaluation metric: root mean square error (RMSE)

        ⑦Epochs: 30

        ⑧A t-test is applied for significance testing (a minimal split/RMSE/t-test sketch is given below)
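
A minimal sketch of the evaluation pipeline described above: an 80/10/10 random split, the RMSE metric, and a t-test between repeated runs of two HPO methods. The RMSE numbers below are random placeholders, not results from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 80%/10%/10% split over N samples (ESOL has 1128).
N = 1128
idx = rng.permutation(N)
train_idx, val_idx, test_idx = np.split(idx, [int(0.8 * N), int(0.9 * N)])

def rmse(y_true, y_pred):
    """Root mean square error, the paper's evaluation metric."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Placeholder RMSEs from repeated HPO runs of two methods, compared with a t-test.
rmse_tpe = rng.normal(0.60, 0.02, size=10)
rmse_rs = rng.normal(0.65, 0.03, size=10)
t_stat, p_value = stats.ttest_ind(rmse_tpe, rmse_rs, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```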

2.5.2. Computational cost as a primary consideration

        ①Performance on ESOL:

        ②Performance on FreeSolv:

        ③Performance on Lipophilicity:

        ④t-test on the 3 datasets:

        ⑤Time-limited test:

        ⑥t-test on the time-limited test:

2.5.3. Performance as primary consideration

        ①Experiments on repeated HPO runs on ESOL:

        ②Corresponding t-test:

        ③Experiments on a larger search space:

        ④t-test:

2.6. Conclusion and future work

        ①Under these tests, TPE is the best HPO algorithm and RS is the simplest method

        ②They want to further investigate the meta-hyperparameters of the HPO algorithms

        ③They hope to extend the work across disciplines

3. Reference

Yuan, Y., Wang, W. & Pang, W. (2021) 'A systematic comparison study on hyperparameter optimisation of graph neural networks for molecular property prediction', Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '21). doi: 10.1145/3449639.3459370
