NIPS Scoring Criteria

Reviewers give a score of between 1 and 10 for each paper. The program committee will interpret the numerical score in the following way:

10: Top 5% of accepted NIPS papers, a seminal paper for the ages.

I will consider not reviewing for NIPS again if this is rejected.

9: Top 15% of accepted NIPS papers, an excellent paper, a strong accept.

I will fight for acceptance.

8: Top 50% of accepted NIPS papers, a very good paper, a clear accept.

I vote and argue for acceptance

7: Good paper, accept.

I vote for acceptance, although I would not be upset if it were rejected.

6: Marginally above the acceptance threshold.

I tend to vote for accepting it, but leaving it out of the program would be no great loss.

5: Marginally below the acceptance threshold.

I tend to vote for rejecting it, but having it in the program would not be that bad.

4: An OK paper, but not good enough. A rejection.

I vote for rejecting it, although I would not be upset if it were accepted.

3: A clear rejection.

I vote and argue for rejection.

2: A strong rejection. I'm surprised it was submitted to this conference.

I will fight for rejection.

1: Trivial or wrong or known. I'm surprised anybody wrote such a paper.

I will consider not reviewing for NIPS again if this is accepted.

Reviewers should NOT assume that they have received an unbiased sample of papers, nor should they adjust their scores to achieve an artificial balance of high and low scores. Scores should reflect absolute judgments of the contributions made by each paper.
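The 1–10 scale above is a fixed mapping from numerical score to committee interpretation. As a minimal sketch (not an official NIPS tool), it can be encoded as a lookup table, e.g. for a hypothetical review-collection script; the `interpret` helper below is illustrative only:

```python
# Hypothetical encoding of the NIPS 1-10 score scale described above.
SCORE_INTERPRETATION = {
    10: "Top 5% of accepted NIPS papers; a seminal paper for the ages.",
    9:  "Top 15% of accepted NIPS papers; an excellent paper, a strong accept.",
    8:  "Top 50% of accepted NIPS papers; a very good paper, a clear accept.",
    7:  "Good paper, accept.",
    6:  "Marginally above the acceptance threshold.",
    5:  "Marginally below the acceptance threshold.",
    4:  "An OK paper, but not good enough; a rejection.",
    3:  "A clear rejection.",
    2:  "A strong rejection.",
    1:  "Trivial or wrong or known.",
}

def interpret(score: int) -> str:
    """Return the committee's interpretation of a numerical review score."""
    if score not in SCORE_INTERPRETATION:
        raise ValueError(f"score must be an integer in 1..10, got {score!r}")
    return SCORE_INTERPRETATION[score]
```

Since the instructions stress absolute judgments, such a table would be used only to display the meaning of a score, never to rescale or rebalance scores across a reviewer's batch.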

Confidence Scores

Reviewers also give a confidence score between 1 and 5 for each paper. The program committee will interpret the numerical score in the following way:

5:

The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature.

4:

The reviewer is confident but not absolutely certain that the evaluation is correct. It is unlikely but conceivable that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature.

3:

The reviewer is fairly confident that the evaluation is correct. It is possible that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature. Mathematics and other details were not carefully checked.

2:

The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper.

1:

The reviewer's evaluation is an educated guess. Either the paper is not in the reviewer's area, or it was extremely difficult to understand.
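The confidence scale above naturally suggests weighting each review by how certain the reviewer is. The sketch below is a hypothetical aggregation (a confidence-weighted mean), not NIPS's actual decision procedure; the scale strings paraphrase the definitions above:

```python
# Hypothetical encoding of the 1-5 confidence scale described above.
CONFIDENCE = {
    5: "Absolutely certain; very familiar with the relevant literature.",
    4: "Confident but not absolutely certain.",
    3: "Fairly confident; math and details not carefully checked.",
    2: "Willing to defend, but may have misunderstood central parts.",
    1: "An educated guess.",
}

def weighted_score(reviews: list[tuple[int, int]]) -> float:
    """Confidence-weighted mean of (score, confidence) pairs.

    Illustrative only: gives more certain reviewers more influence.
    """
    total_weight = sum(conf for _, conf in reviews)
    return sum(score * conf for score, conf in reviews) / total_weight
```

For example, a score of 8 at confidence 4 and a score of 5 at confidence 2 average to (8·4 + 5·2) / 6 = 7.0, pulling toward the more confident reviewer.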

Qualitative Evaluation

All NIPS papers should be good scientific papers, regardless of their specific area. We judge whether a paper is good using four criteria; a reviewer should comment on all of these, if possible:

Quality

Is the paper technically sound? Are claims well-supported by theoretical analysis or experimental results? Is this a complete piece of work, or merely a position paper? Are the authors careful (and honest) about evaluating both the strengths and weaknesses of the work?

Clarity

Is the paper clearly written? Is it well-organized? (If not, feel free to make suggestions to improve the manuscript.) Does it adequately inform the reader? (A superbly written paper provides enough information for the expert reader to reproduce its results.)

Originality

Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced? We recommend that you check the proceedings of recent NIPS conferences to make sure that each paper is significantly different from papers in previous proceedings. Abstracts and links to many of the previous NIPS papers are available from http://books.nips.cc

Significance

Are the results important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the paper address a difficult problem in a better way than previous research? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions on existing data, or a unique theoretical or pragmatic approach?


Reposted from http://nips.cc/PaperInformation/ReviewerInstructions

Reposted from: https://www.cnblogs.com/ysjxw/archive/2009/12/11/1622170.html
