Recommended Machine Learning Papers and Books

I recently found an unattributed repost of "Recommended Machine Learning Papers and Books" online. It looked good; searching around suggests it was written by a well-regarded poster on the Shuimu (水木社区) BBS, but I could not find the original link. So I have tidied the list up here and packaged the materials on a network drive (items I could not find are marked "# missing"), for anyone who enjoys digging into this material.

(My own knowledge is limited, so please forgive any mistakes in tracking these down.)

Basic Models

HMM (Hidden Markov Models)
[1] A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition
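For a taste of what [1] covers, here is a minimal sketch of the forward algorithm for a two-state HMM; all the probabilities below are invented for illustration:

```python
import numpy as np

# Hypothetical 2-state HMM over 2 observation symbols.
A = np.array([[0.7, 0.3],          # transition probabilities A[i, j] = P(j | i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # emission probabilities B[i, o] = P(o | i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])          # initial state distribution

def forward(obs):
    """P(obs | model) via the forward recursion."""
    alpha = pi * B[:, obs[0]]             # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # induction
    return alpha.sum()                    # termination
```

The probabilities of all observation sequences of a fixed length sum to one, which makes a handy sanity check.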

ME (Maximum Entropy)
[2] A Maximum Entropy Approach to Natural Language Processing

MEMM (Maximum Entropy Markov Models)
[3] Maximum Entropy Markov Models for Information Extraction and Segmentation

CRF (Conditional Random Fields)
[4] An Introduction to Conditional Random Fields for Relational Learning
[5] Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

SVM (Support Vector Machine)
[6] 统计学习理论 (张学工) # missing

LSA/LSI (Latent Semantic Analysis / Indexing)
[7] An Introduction to Latent Semantic Analysis

pLSA/pLSI (Probabilistic Latent Semantic Analysis / Indexing)
[8] Probabilistic Latent Semantic Analysis

LDA (Latent Dirichlet Allocation)
[9] Latent Dirichlet Allocation # solves the model with variational inference and EM
[10] Parameter estimation for text analysis # solves the model with Gibbs sampling

Neural Networks (including Hopfield networks, self-organizing maps, stochastic networks, Boltzmann machines, etc.)
[11] Neural Networks: A Systematic Introduction

Diffusion Networks
[12] Diffusion Networks, Products of Experts, and Factor Analysis

Markov Random Fields

Generalized Linear Models (including logistic regression)
[13] An Introduction to Generalized Linear Models (2nd) # a 3rd edition exists, but I could not find it
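Since logistic regression is the canonical GLM with a logit link, a minimal maximum-likelihood fit by gradient ascent conveys the idea; the data below are synthetic and invented for this sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: an intercept column plus one feature.
rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=200)]
w_true = np.array([-0.5, 2.0])                       # hypothetical true weights
y = (rng.random(200) < sigmoid(X @ w_true)).astype(float)

# Maximum likelihood by batch gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    w += 0.01 * X.T @ (y - sigmoid(X @ w))           # gradient of log-likelihood
```

In practice one would use Newton's method (IRLS), which is how GLM libraries fit these models; plain gradient ascent keeps the sketch short.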

Chinese Restaurant Process (Dirichlet processes)
[14] Dirichlet Processes, Chinese Restaurant Processes and all that
[15] Estimating a Dirichlet Distribution

Key Algorithms

EM (Expectation Maximization)
[16] Expectation Maximization and Posterior Constraints
[17] Maximum Likelihood from Incomplete Data via the EM Algorithm
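The flavor of [17] comes through in a toy EM fit of a two-component 1-D Gaussian mixture (variances fixed at 1 to keep it short; the data are synthetic and invented for this sketch):

```python
import numpy as np

# Synthetic 1-D data from two Gaussians (means -2 and 3, unit variance).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

# EM for a 2-component mixture with known unit variances.
mu = np.array([-1.0, 1.0])                  # initial means
pw = np.array([0.5, 0.5])                   # mixing weights
for _ in range(50):
    # E-step: responsibilities (the 1/sqrt(2*pi) constant cancels in the ratio).
    dens = pw * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for weights and means.
    pw = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
```

After a few dozen iterations the estimated means land close to the true component means.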

MCMC & Gibbs Sampling (Markov chain Monte Carlo and Gibbs sampling)
[18] Markov Chain Monte Carlo and Gibbs Sampling
[19] Explaining the Gibbs Sampler
[20] An Introduction to MCMC for Machine Learning
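A standard textbook example of Gibbs sampling, in the spirit of [19], is a bivariate standard normal with correlation rho, where each full conditional is itself a normal distribution; the numbers here are illustrative:

```python
import numpy as np

# Target: bivariate standard normal with correlation rho.
# Full conditionals: x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y.
rho = 0.8
rng = np.random.default_rng(2)
x = y = 0.0
samples = []
for i in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    if i >= 1000:                      # discard burn-in
        samples.append((x, y))
samples = np.array(samples)
```

The empirical correlation of the retained samples recovers rho, which is the usual way to check such a sampler.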

PageRank
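PageRank itself is just a power iteration on a damped link matrix; the four-page link graph below is made up for illustration:

```python
import numpy as np

# Hypothetical link graph: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85                          # d is the damping factor

# Column-stochastic matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

# Power iteration on r = (1 - d)/n + d * M r.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * M @ r
```

Page 2, which collects the most incoming links, ends up with the highest rank.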

Matrix Decomposition Algorithms
SVD, QR decomposition, Schur decomposition, LU decomposition, spectral decomposition
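Most of these decompositions are one-liners in NumPy, which makes them easy to experiment with (LU lives in `scipy.linalg` and is omitted here); a sketch on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))

# SVD: A = U diag(s) V^T
U, s, Vt = np.linalg.svd(A)
# QR: A = Q R, with Q orthogonal and R upper triangular
Q, R = np.linalg.qr(A)
# Spectral decomposition of the symmetric matrix A A^T
vals, vecs = np.linalg.eigh(A @ A.T)
```

A useful connection: the eigenvalues of A A^T are exactly the squared singular values of A.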

Boosting (including AdaBoost)
[21] adaboost_talk

Spectral Clustering
[22] A Tutorial on Spectral Clustering
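The core recipe in [22] (build a similarity graph, form the graph Laplacian, threshold the Fiedler vector) fits in a few lines; the toy 1-D data are invented for this sketch:

```python
import numpy as np

# Toy 1-D data: two tight clusters around 0 and 3.
rng = np.random.default_rng(4)
pts = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(3.0, 0.1, 20)])

# Gaussian similarity graph and unnormalized Laplacian L = D - W.
W = np.exp(-(pts[:, None] - pts[None, :]) ** 2)
L = np.diag(W.sum(axis=1)) - W

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# changes sign across the two clusters; thresholding it gives the labels.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

For more than two clusters one would take the first k eigenvectors and run k-means on the rows, as the tutorial describes.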

Energy-Based Learning
[23] A Tutorial on Energy-Based Learning

Belief Propagation
[24] Understanding Belief Propagation and its Generalizations
[25] Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms
[26] Loopy Belief Propagation for Approximate Inference: An Empirical Study
[27] Loopy Belief Propagation

AP (Affinity Propagation)
[28] Affinity Propagation

L-BFGS
[29] 最优化理论与算法 (2nd ed.), Chapter 10 # missing
[30] On the Limited Memory BFGS Method for Large Scale Optimization

IIS (Improved Iterative Scaling)
[31] Improved Iterative Scaling Algorithm: Parameter Estimation of Feature-based Models

Theory

Probabilistic Networks
[32] An Introduction to Variational Methods
[33] Factor Graphs and the Sum-Product Algorithm
[34] Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms
[35] Graphical Models, Exponential Families, and Variational Inference

Variational Theory # only variational methods on probabilistic networks are used here
[36] Tutorial on variational approximation methods
[37] A Variational Bayesian Framework for Graphical Models
[38] Tutorial on variational approximation methods (ppt) # the original listed "variational tutorial.pdf", which I could not find

Information Theory
[39] Elements of Information Theory (2nd)

Measure Theory
[40] Measure Theory (Halmos) # missing; said to be well written but somewhat dated
[41] 测度论讲义 (严加安)

Probability Theory
[42] 概率与测度论 # missing

Stochastic Processes
[43] 应用随机过程 (林元烈) # missing
[44] 随机数学引论 (林元烈) # missing

Matrix Theory
[45] 矩阵分析与应用 (张贤达) # missing

Pattern Recognition
[46] 模式识别 (2nd) (边肇祺) # missing
[47] Pattern Recognition and Machine Learning

Optimization Theory
[48] Convex Optimization
[49] 最优化理论与算法(陈宝林)

Functional Analysis
[50] 泛函分析导论及应用

Kernel Methods
[51] 模式分析的核方法 # missing

Statistics
[52] 统计手册 # missing

Miscellaneous

Semi-Supervised Learning
[53] Semi-Supervised Learning (MIT Press) # missing
[54] Graph-Based Semi-Supervised Learning

Co-Training

Self-Training

Packaged download (network drive): http://115.com/file/dpu3ribw#机器学习论文与书籍推荐.7z

Thanks to the original author of this list for sharing it so generously.

Posted in Machine Learning.
