Paper: Reference Citation Formats

This post covers the standard formats for listing references in academic papers, with examples for the common document types (monographs, journal articles, conference papers, and so on) to help you cite sources correctly.


References

1. International format (IEEE style)
[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
[2] T. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, Jan. 1967.
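
To make the pattern explicit, here is a minimal Python sketch that assembles an IEEE-style journal entry like [1] above from structured fields. The dictionary keys and the ieee_citation helper are illustrative choices of mine, not any standard schema, and a real reference manager handles many more edge cases.

# Minimal sketch of IEEE-style journal citation assembly.
# Field names below are illustrative, not a standard schema.
def ieee_citation(num, ref):
    authors = ref["authors"]
    if len(authors) == 1:
        author_str = authors[0]
    elif len(authors) == 2:
        author_str = authors[0] + " and " + authors[1]  # two authors: no comma
    else:
        # Three or more: comma-separated, with ", and" before the last.
        author_str = ", ".join(authors[:-1]) + ", and " + authors[-1]
    pages = ref["pages"].replace("-", "–")  # IEEE page ranges use an en dash
    return (f'[{num}] {author_str}, “{ref["title"]},” {ref["journal"]}, '
            f'vol. {ref["volume"]}, no. {ref["number"]}, pp. {pages}, {ref["year"]}.')

rumelhart = {
    "authors": ["D. E. Rumelhart", "G. E. Hinton", "R. J. Williams"],
    "title": "Learning representations by back-propagating errors",
    "journal": "Nature",
    "volume": 323, "number": 6088, "pages": "533-536", "year": 1986,
}
print(ieee_citation(1, rumelhart))
# [1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning
# representations by back-propagating errors,” Nature, vol. 323,
# no. 6088, pp. 533–536, 1986.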

2. Chinese format (GB/T 7714 style)
[1] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.
[2] Cover T M, Hart P E. Nearest neighbor pattern classification[J]. IEEE Transactions on Information Theory, 1967, 13(1): 21-27.
[3] Dalal N, Triggs B. Histograms of oriented gradients for human detection[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2005.
[4] Kazemi V, Sullivan J. One millisecond face alignment with an ensemble of regression trees[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2014: 1867-1874.
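
And a mirror-image sketch for the GB/T-style journal entries above; the same hedges apply (the field names are mine, and real tooling handles far more edge cases than this).

# Sketch of a GB/T 7714-style journal citation.
def gbt_journal_citation(num, ref):
    # GB/T lists authors as "Surname Initials" (no periods), comma-separated,
    # with the document-type code ([J], [C], ...) placed after the title.
    authors = ", ".join(ref["authors"])
    return (f'[{num}] {authors}. {ref["title"]}[J]. {ref["journal"]}, '
            f'{ref["year"]}, {ref["volume"]}({ref["number"]}): {ref["pages"]}.')

cover_hart = {
    "authors": ["Cover T M", "Hart P E"],
    "title": "Nearest neighbor pattern classification",
    "journal": "IEEE Transactions on Information Theory",
    "volume": 13, "number": 1, "pages": "21-27", "year": 1967,
}
print(gbt_journal_citation(2, cover_hart))
# [2] Cover T M, Hart P E. Nearest neighbor pattern classification[J].
# IEEE Transactions on Information Theory, 1967, 13(1): 21-27.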

Example: “ImageNet Classification with Deep Convolutional Neural Networks”

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton (University of Toronto)

REFERENCES
[1] R.M. Bell and Y. Koren. Lessons from the netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007.
[2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. www.imagenet.org/challenges. 2010.
[3] L. Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
[4] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. Arxiv preprint arXiv:1202.2745, 2012.
[5] D.C. Cireşan, U. Meier, J. Masci, L.M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. Arxiv preprint arXiv:1102.0183, 2011.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
[7] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ILSVRC-2012, 2012. URL http://www.image-net.org/challenges/LSVRC/2012/.
[8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59–70, 2007.
[9] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.
[10] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[11] K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.
[12] A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.
[13] A. Krizhevsky. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 2010.
[14] A. Krizhevsky and G.E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.
[15] Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, 1990.
[16] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–97. IEEE, 2004.
[17] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 253–256. IEEE, 2010.
[18] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.
[19] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric Learning for Large Scale Image Classification: Generalizing to New Classes at Near-Zero Cost. In ECCV - European Conference on Computer Vision, Florence, Italy, October 2012.
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proc. 27th International Conference on Machine Learning, 2010.
[21] N. Pinto, D.D. Cox, and J.J. DiCarlo. Why is real-world visual object recognition hard? PLoS computational biology, 4(1):e27, 2008.
[22] N. Pinto, D. Doukhan, J.J. DiCarlo, and D.D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS computational biology, 5(11):e1000579, 2009.
[23] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman. Labelme: a database and web-based tool for image annotation. International journal of computer vision, 77(1):157–173, 2008.
[24] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1665–1672. IEEE, 2011.
[25] P.Y. Simard, D. Steinkraus, and J.C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, pages 958–962, 2003.
[26] S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.

A convenient way to export citations:

Search for the paper on Baidu Scholar (百度学术) and click “引用” (Cite) to copy a ready-formatted reference.

Appendix: standard formats for listing references in academic papers (a small template sketch follows the list)

1. Monograph: [no.] Author. Title[M]. Edition (omitted for a first edition). Place of publication: Publisher, year: page range.

2. Journal article: [no.] Author. Title[J]. Journal name, year, volume(issue): page range.

3. Conference proceedings (or collection): [no.] Author. Paper title[A]. Editor(s). Proceedings title[C]. Place of publication: Publisher, year: page range.

4. Dissertation or thesis: [no.] Author. Title[D]. Place: Degree-granting institution, year.

5. Patent: [no.] Patent applicant. Patent title[P]. Country (or region): patent number, publication date.

6. Technical report: [no.] Author. Report title[R]. Report number, place of publication: Publisher, year: page range.

7. Standard: [no.] Standard number, standard title[S]. Date of issue.

8. Newspaper article: [no.] Author. Title[N]. Newspaper name, year-month-day (edition).

9. Electronic resource: [no.] Primary author. Title [document-type and carrier code]. Source or access URL, publication/update date or access date (either may be given).

10. Any document type not covered above: [no.] Primary author. Title[Z]. Place of publication: Publisher, year.
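
As a rough illustration of how these patterns differ only in their fields and type codes, here is a sketch expressing a few of them as Python format templates. The placeholder names are my own choice, and a real formatter would also handle missing fields, multiple authors, and punctuation edge cases.

# A few of the patterns above as format templates, keyed by type code.
GBT_TEMPLATES = {
    "M": "[{num}] {author}. {title}[M]. {place}: {publisher}, {year}: {pages}.",
    "J": "[{num}] {author}. {title}[J]. {journal}, {year}, {volume}({issue}): {pages}.",
    "D": "[{num}] {author}. {title}[D]. {place}: {institution}, {year}.",
    "S": "[{num}] {std_no}, {title}[S]. {issued}.",
}

# Entry [12] of the AlexNet list above, rendered as a dissertation ([D]):
print(GBT_TEMPLATES["D"].format(
    num=12, author="Krizhevsky A",
    title="Learning multiple layers of features from tiny images",
    place="Toronto", institution="University of Toronto", year=2009))
# [12] Krizhevsky A. Learning multiple layers of features from tiny images[D].
# Toronto: University of Toronto, 2009.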
