Classic Papers, Organized by Topic

Autoencoders:

1. Bengio Y. Learning deep architectures for AI[J]. Foundations and Trends in Machine Learning, 2009, 2(1): 1-127.

2. Bengio Y, Lamblin P, Popovici D, et al. Greedy layer-wise training of deep networks[C]//Proceedings of the 20th Annual Conference on Neural Information Processing Systems. Vancouver: Neural Information Processing Systems Foundation, 2007: 153-160.

3. Wang W, Ooi B C, Yang X Y, et al. Effective multi-modal retrieval based on stacked auto-encoders[J]. Proceedings of the VLDB Endowment, 2014, 7.

4. Salakhutdinov R, Hinton G. Semantic hashing[J]. International Journal of Approximate Reasoning, 2009, 50.

5. Goroshin R, LeCun Y. Saturating auto-encoders[DB/OL]. [2014-05-08]. http://arxiv.org/pdf/1301.3577.pdf.

6. Jiang Xiaojuan, Zhang Yinghua, Zhang Wensheng, et al. A novel sparse auto-encoder for deep unsupervised learning[C]. 2013: 256-261.

7. Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11: 3371-3408.

8. Masci J, Meier U, Ciresan D, et al. Stacked convolutional auto-encoders for hierarchical feature extraction[C]//Proceedings of the 21st International Conference on Artificial Neural Networks. Espoo: Springer-Verlag, 2011: 52-59.

9. Lee H, Ekanadham C, Ng A Y. Sparse deep belief net model for visual area V2[C]//Advances in Neural Information Processing Systems. 2007: 873-880.

10. Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning from unlabeled data[C]//Proceedings of the 24th International Conference on Machine Learning. ACM, 2007: 759-766.

11. Lee H, Grosse R, Ranganath R, et al. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations[C]//Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009: 609-616.

Restricted Boltzmann Machines:

1. Bengio Y, Lamblin P, Popovici D, et al. Greedy layer-wise training of deep networks[C]//Proceedings of the 20th Annual Conference on Neural Information Processing Systems. Vancouver: Neural Information Processing Systems Foundation, 2007: 153-160.

2. Cho K H, Raiko T, Ilin A, et al. A two-stage pretraining algorithm for deep Boltzmann machines[C]//Proceedings of the 23rd International Conference on Artificial Neural Networks. Sofia: Springer-Verlag, 2013: 106-113.

3. Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554.

4. Lee H, Grosse R, Ranganath R, et al. Unsupervised learning of hierarchical representations with convolutional deep belief networks[J]. Communications of the ACM, 2011, 54(10): 95-103.

5. Halkias X C, Paris S, Glotin H. Sparse penalty in deep belief networks: using the mixed norm constraint[DB/OL]. [2014-05-08]. http://arxiv.org/pdf/1301.3533.pdf.

6. Poon-Feng K, Huang D Y, Dong M H, et al. Acoustic emotion recognition based on fusion of multiple feature-dependent deep Boltzmann machines[C]//Proceedings of the 9th International Symposium on Chinese Spoken Language Processing. Singapore: IEEE, 2014: 584-588.

7. Taylor G W, Hinton G E, Roweis S T. Modeling human motion using binary latent variables[J]. Advances in Neural Information Processing Systems, 2007, 19: 1345.

Convolutional Neural Networks:

1. LeCun Y, Jackel L D, Bottou L, et al. Learning algorithms for classification: a comparison on handwritten digit recognition[M]. Korea: World Scientific, 1995: 261-276.

2. Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 26th Annual Conference on Neural Information Processing Systems. Lake Tahoe: Neural Information Processing Systems Foundation, 2012: 1097-1105.

3. Zheng Yi, Liu Qi, Chen Enhong, et al. Time series classification using multi-channels deep convolutional neural networks[C]//Proceedings of the 15th International Conference on Web-Age Information Management. Macau: Springer-Verlag, 2014: 298-310.

4. Sun Yi, Wang Xiaogang, Tang Xiaoou. Deep learning face representation from predicting 10,000 classes[C]//Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Columbus: IEEE Computer Society, 2014: 1891-1898.

5. Shen Yelong, He Xiaodong, Gao Jianfeng, et al. Learning semantic representations using convolutional neural networks for Web search[C]//Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web. Seoul: IW3C2, 2014: 373-374.

The DeepID2 project from the Chinese University of Hong Kong raised face recognition accuracy to 99.15%:

Sun Y, Wang X, Tang X. Deep learning face representation by joint identification-verification[J]. CoRR, 2014: abs/1406.4773.
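As the title indicates, training combines an identification signal (classify the identity) with a verification signal that pulls features of the same identity together and pushes different identities apart. Below is a minimal PyTorch sketch of such a joint loss; the margin, the weight lam, the 160-dimensional features, and the 100-identity toy setup are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F

def joint_id_verif_loss(feat1, feat2, logits1, labels1, labels2, margin=1.0, lam=0.05):
    """Softmax identification loss plus a contrastive verification loss on feature pairs."""
    ident = F.cross_entropy(logits1, labels1)       # classify the identity of the first image
    dist = (feat1 - feat2).pow(2).sum(dim=1)        # squared distance between the pair's features
    same = (labels1 == labels2).float()
    verif = same * dist + (1 - same) * F.relu(margin - dist.sqrt()).pow(2)
    return ident + lam * verif.mean()

# Toy usage with random 160-dim face features and 100 identities (made-up sizes).
feat1, feat2 = torch.randn(8, 160), torch.randn(8, 160)
logits1 = torch.randn(8, 100)
labels1, labels2 = torch.randint(0, 100, (8,)), torch.randint(0, 100, (8,))
print(joint_id_verif_loss(feat1, feat2, logits1, labels1, labels2))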

 

S. Ji et al. proposed a 3D convolutional neural network model for action recognition. The model extracts features by applying 3D convolutions over both the spatial and temporal dimensions:

Ji S, Xu W, Yang M, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 221-231.
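To make the 3D convolution concrete, here is a minimal PyTorch sketch (the clip size, channel counts, and kernel shape are illustrative assumptions, not the paper's architecture): a single kernel slides over the temporal axis as well as the two spatial axes of a video clip, so each output value summarizes a small space-time volume.

import torch
import torch.nn as nn

# A batch of single-channel video clips: (batch, channels, frames, height, width).
clips = torch.randn(4, 1, 16, 60, 40)

# The 3x5x5 kernel spans 3 frames and a 5x5 spatial patch, so every output value
# summarizes a small spatio-temporal volume rather than a purely spatial patch.
conv3d = nn.Conv3d(in_channels=1, out_channels=32, kernel_size=(3, 5, 5))
pool3d = nn.MaxPool3d(kernel_size=(1, 2, 2))  # pool spatially, keep temporal resolution

features = pool3d(torch.relu(conv3d(clips)))
print(features.shape)  # torch.Size([4, 32, 14, 28, 18])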

M. Baccouche et al. proposed a sequential deep learning model that learns to classify human actions without any prior knowledge. In the first step, a convolutional neural network is extended to three dimensions to learn spatio-temporal features automatically; an RNN is then trained to classify each sequence:

Baccouche M, Mamalet F, Wolf C, et al. Sequential deep learning for human action recognition[C]//Human Behavior Understanding. Berlin: Springer, 2011: 29-39.
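A rough sketch of this two-stage idea, with assumed shapes and layer sizes rather than the paper's configuration: a small 3D CNN encodes each short clip of a longer video, and an LSTM then classifies the resulting sequence of clip descriptors.

import torch
import torch.nn as nn

class Conv3DThenRNN(nn.Module):
    """Toy two-stage model: 3D-conv features per clip, then an LSTM over the clips."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse each clip to a 16-dim descriptor
        )
        self.rnn = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        # x: (batch, n_clips, channels, frames, height, width)
        b, n, c, t, h, w = x.shape
        feats = self.encoder(x.reshape(b * n, c, t, h, w)).reshape(b, n, 16)
        _, (h_n, _) = self.rnn(feats)           # h_n: (1, batch, 32)
        return self.classifier(h_n.squeeze(0))  # one score vector per sequence

model = Conv3DThenRNN()
video = torch.randn(2, 5, 1, 9, 34, 54)  # 2 sequences, 5 clips each
print(model(video).shape)                # torch.Size([2, 10])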

 

Lee et al. proposed the convolutional restricted Boltzmann machine (CRBM) so that deep learning can be applied to full-sized, general images:

Lee H, Grosse R, Ranganath R, et al. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations[C]//Proceedings of the 26th Annual International Conference on Machine Learning. New York, NY, USA: ACM, 2009: 609-616.
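As a simplified illustration of the convolutional weight sharing in a CRBM (filter sizes and counts are made up here, and the paper's probabilistic max-pooling layer is omitted), each group of hidden units is driven by convolving the visible image with one shared filter:

import torch
import torch.nn.functional as F

# Binary visible image v: (batch, 1, H, W); K small filters shared across all locations.
v = (torch.rand(1, 1, 28, 28) > 0.5).float()
K, filt = 8, 7
W = 0.01 * torch.randn(K, 1, filt, filt)  # one filter per group of hidden units
b = torch.zeros(K)                        # one hidden bias per group

# P(h_ij^k = 1 | v) = sigmoid((filter k convolved with v)_ij + b_k): because the same
# filter is applied at every position, the parameter count is independent of image size.
hid_prob = torch.sigmoid(F.conv2d(v, W, bias=b))  # (1, K, 22, 22)
hid_sample = torch.bernoulli(hid_prob)            # stochastic hidden feature maps
print(hid_prob.shape)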

The neural autoregressive distribution estimator (NADE) can estimate the distribution of input samples more effectively than the restricted Boltzmann machine:

Larochelle H, Murray I. The neural autoregressive distribution estimator[C]//Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011). Fort Lauderdale, FL, United States: Microtome Publishing, 2011: 29-37.

Zheng Y, Zhang Y J, Larochelle H. A supervised neural autoregressive topic model for simultaneous image classification and annotation: 1305.5306[R]. New York, USA: Cornell University, 2013.
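A minimal NumPy sketch of the NADE idea with random, untrained parameters (dimensions chosen only for illustration): each binary input dimension is predicted from the previous dimensions through a shared hidden layer, so the exact log-likelihood of a sample is obtained in a single ordered pass, with no intractable partition function as in an RBM.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def nade_log_likelihood(x, W, V, b, c):
    """Exact log p(x) for a binary vector x under NADE: p(x) = prod_i p(x_i | x_<i)."""
    D, H = W.shape
    a = c.copy()          # running hidden pre-activation accumulated from x_<i
    log_p = 0.0
    for i in range(D):
        h = sigmoid(a)                   # hidden state given x_1 .. x_{i-1}
        p_i = sigmoid(b[i] + V[i] @ h)   # p(x_i = 1 | x_<i)
        log_p += x[i] * np.log(p_i) + (1 - x[i]) * np.log(1 - p_i)
        a += W[i] * x[i]                 # fold x_i in before predicting x_{i+1}
    return log_p

rng = np.random.default_rng(0)
D, H = 10, 5
W = 0.1 * rng.standard_normal((D, H))
V = 0.1 * rng.standard_normal((D, H))
b, c = np.zeros(D), np.zeros(H)
x = rng.integers(0, 2, size=D)
print(nade_log_likelihood(x, W, V, b, c))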

Ji et al. proposed a multi-layer convolutional neural network that learns spatio-temporal features from video blocks, using convolution operations to learn features over the entire video, thereby replacing the earlier pipeline of spatio-temporal interest point detection and feature descriptor extraction:

Ji S, Xu W, Yang M, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 221-231.
