A Letter from Fei-Fei Li to Her Students: How to Do Good Research and Write a Good Paper

Reposted on July 18, 2013, 18:21:35

I came across this on Weibo. It was a rewarding read, above all for straightening out one's attitude toward doing research.

I found it here: http://www.vjianke.com/ZM0BC.clip

 

--------------------------------------- The original text follows ---------------------------------------------

 

De-mystifying Good Research and Good Papers 

By Fei-Fei Li, 2009.03.01 
 


Please remember this:  
1000+ computer vision papers get published every year! 
Only 5-10 are worth reading and remembering! 
 
Since many of you are writing your papers now, I thought I'd share these thoughts with you. I have probably said all of this at various points during our group and individual meetings. But as I continue my AC reviews these days (that's 70 papers and 200+ reviews -- between me and my AC partner), the following points just keep coming up. Not enough people conduct first-class research. And not enough people write good papers.

 

- Every research project and every paper should be conducted and written with one singular purpose: *to genuinely advance the field of computer vision*. So when you conceptualize and carry out your work, you need to be constantly asking yourself this question in the most critical way you can – “Would my work define or reshape xxx (problem, field, technique) in the future?” This means publishing papers is NOT about "this has not been published or written before, let me do it", nor is it about “let me find an arcane little problem that can get me an easy poster”. It's about "if I do this, I could offer a better solution to this important problem," or “if I do this, I could add a genuinely new and important piece of knowledge to the field.” You should always conduct research with the goal that it could be directly used by many people (or industry). In other words, your research topic should have many ‘customers’, and your solution would be the one they want to use.

 

- A good research project is not about the past (i.e. obtaining a higher performance than the previous N papers). It's about the future (i.e. inspiring N future papers to follow and cite you, N → ∞).

 

- A CVPR'09 submission with a Caltech101 performance of 95% received scores of 4-4-4 (three weak rejects) this year, and will be rejected. This is by far the highest performance I've seen for Caltech101. So why is this paper rejected? Because it doesn't teach us anything, and no one is likely to use it for anything. It uses a known technique (at least to many people already) with super-tweaked parameters custom-made for a dataset that is no longer a good reflection of real-world image data. It uses a BoW representation without object-level understanding. All reviewers (from very different angles) asked the same question: "what do we learn from your method?" And the only sensible answer I could come up with is that Caltech101 is no longer a good dataset.

 

- Einstein used to say: everything should be made as simple as possible, but not simpler. Your method/algorithm should be the simplest, most coherent and principled one you can think of for solving the problem. Computer vision research, like many other areas of engineering and science research, is about problems, not equations. No one appreciates a complicated graphical model with super fancy inference techniques that essentially achieves the same result as a simple SVM -- unless it offers a deeper understanding of your data that no simpler method could offer (see the baseline sketch after this letter). A method in which you have to manually tune many parameters is not considered principled or coherent.

 

- This might sound corny, but it is true. You're PhD students in one of the best universities in the world. This means you embody the highest level of intellectualism of humanity today. This means you are NOT a technician and you are NOT a coding monkey. When you write your paper, you communicate ideas and insights. That's what a paper is about. This is how you should approach your writing. You need to feel proud of your paper not just for the day or week it is finished, but for many years to come.

 

- Set a high goal for yourself – the truth is, you can achieve it as long as you put your heart into it! When you think of your paper, ask yourself this question: Is this going to be among the 10 papers of 2009 that people will remember in computer vision? If not, why not? The truth is that only 10 ± epsilon get remembered every year. Most of the papers are just meaningless publication games. A long string of mediocre papers on your resume can at best get you a Google software engineer job (if at all – 2009.03 update: no, Google doesn't hire PhDs for this anymore). A couple of seminal papers can get you a faculty job at a top university. This is the truth that most graduate students don't know, or don't have a chance to know.

 

- The review process is highly random. But there is one golden rule that withstands the test of time and randomness -- badly written papers get bad reviews. Period. It doesn't matter if the idea is good, the results are good, or the citations are good. Not at all. Writing is critical -- and this is ironic, because engineers are the worst-trained writers among all disciplines in a university. You need to discipline yourself: leave time for writing, think deeply about writing, and revise it over and over again until it is as polished as you can make it.

 

- Last but not least, please remember this rule: important problem (inspiring idea) + solid and novel theory + convincing and analytical experiments + good writing = seminal research + excellent paper. If any of these ingredients is weak, your paper, and hence your reviewer scores, will suffer.
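To make the simplicity point above concrete, here is a minimal sketch (not part of the original letter, and only one possible way to do it) of the "measure the simple baseline first" habit that point implies. It assumes a scikit-learn setup; the digits dataset and the fixed hyperparameters are placeholders standing in for whatever features and settings a real paper would actually use.

```python
# Minimal sketch: before proposing anything fancier, record what a plain,
# untuned linear SVM already achieves on the same features.
from sklearn.datasets import load_digits  # stand-in for "your features"
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)

# One fixed baseline: feature scaling + linear SVM with default-ish settings.
baseline = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))

# 5-fold cross-validated accuracy of the baseline.
scores = cross_val_score(baseline, X, y, cv=5)
print(f"Linear SVM baseline accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# If a far more complex model only matches this number, its complexity has to be
# justified by something other than accuracy (e.g. genuinely new insight into the data).
```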

