Machine Learning Series, Complete Collection: 301 Pages of Carefully Organized PDF Notes!

Machine Learning Notes, PDF Subscription Edition

Copyright notice: "Machine Learning Notes (Subscription Edition)" is my own independent work and may not be reproduced without permission. Copyright © 2020 Sakura-gh

Follow the WeChat official account "Sakura的知识库" (Sakura's Knowledge Base), or visit GitHub: https://github.com/Sakura-gh/ML-notes to get the notes; they are continuously updated~

The cover overview is shown below:
[Cover preview image]

P.S.: Bit by bit, the notes have grown to nearly 200,000 characters. Learning is a lonely process with an unknown outcome; we are all moving forward on a brand-new road, and everyone's experience along the way is different. I started recording notes on Professor Hung-yi Lee's machine learning course and sharing them on GitHub purely for my own review, and I was flattered that so many people found them useful.

The Markdown and HTML versions of the notes are fully open-sourced on GitHub. Many readers told me that a PDF version is more convenient: it can be read directly on an iPad, or printed out for studying anywhere. So I compiled the notes into a single document and published it as an e-book, over 200,000 characters in total. Open-sourcing takes effort, and I hope you will support the project.

In addition, I promise that the HTML version, which can be read online on GitHub, will remain free and open-source forever, for everyone to study and reference!

The chapters in the PDF above correspond to the following CSDN blog posts:

Machine Learning Series 1: Machine Learning Concepts and Introduction
Machine Learning Series 2: A Regression Case Study
Gradient descent code example: Gradient Descent Demo (Adagrad); a brief sketch follows this list
Machine Learning Series 4: Where Model Error Comes From and How to Reduce It
Machine Learning Series 5: Gradient Descent
Machine Learning Series 6: Classification (Probabilistic Generative Models)
Machine Learning Series 7: Logistic Regression
Machine Learning Series 8: Introduction to Deep Learning
Machine Learning Series 9: Backpropagation
Machine Learning Series 10: Handwritten Digit Recognition (Keras 2.0)
Machine Learning Series 11: Convolutional Neural Networks (CNN), Part 1
Machine Learning Series 12: Convolutional Neural Networks (CNN), Part 2
Machine Learning Series 13: Deep Learning Tips and Optimization Methods
Machine Learning Series 14: Why "Deep" Learning?
Machine Learning Series 15: Semi-Supervised Learning
Machine Learning Series 16: Introduction to Unsupervised Learning
Machine Learning Series 17: Unsupervised Learning: PCA Derivation (I)
Machine Learning Series 18: Unsupervised Learning: PCA in Depth (II)
Machine Learning Series 19: Matrix Factorization and an Introduction to Recommender Systems
Machine Learning Series 20: Unsupervised Learning: Word Embeddings
Machine Learning Series 21: Unsupervised Learning: Neighbor Embedding
Machine Learning Series 22: Unsupervised Learning: Autoencoders
Machine Learning Series 23: Unsupervised Learning: Generative Models
Machine Learning Series 24: Transfer Learning
Machine Learning Series 25: Support Vector Machines
Machine Learning Series 26: Recurrent Neural Networks (RNN) (I)
Machine Learning Series 27: Recurrent Neural Networks (RNN) (II)
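For readers curious about the Adagrad-based gradient descent demo mentioned in the list above: the core idea is that each parameter divides its learning rate by the square root of its own accumulated squared gradients, so parameters with a history of large gradients take smaller steps. Below is a minimal, self-contained sketch of that update on a toy linear-regression problem; the data and hyperparameters are made up for illustration and are not the demo's actual code.

```python
import numpy as np

# Toy data roughly following y = 3x + 2 (illustrative values only,
# not the data used in the original demo).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.1, 7.9, 11.2, 13.8, 17.1])

w, b = 0.0, 0.0            # parameters of the line y = w*x + b
lr = 1.0                   # base learning rate
eps = 1e-8                 # avoids division by zero
acc_gw, acc_gb = 0.0, 0.0  # Adagrad accumulators of squared gradients

for step in range(10000):
    err = w * x + b - y
    grad_w = 2.0 * np.dot(err, x) / len(x)  # d(MSE)/dw
    grad_b = 2.0 * err.sum() / len(x)       # d(MSE)/db

    # Adagrad update: each parameter's step is scaled by the inverse
    # root of its own accumulated squared-gradient history.
    acc_gw += grad_w ** 2
    acc_gb += grad_b ** 2
    w -= lr / (np.sqrt(acc_gw) + eps) * grad_w
    b -= lr / (np.sqrt(acc_gb) + eps) * grad_b

print(f"learned w = {w:.3f}, b = {b:.3f}")
```

Compared with vanilla gradient descent, the only change is the per-parameter accumulators acc_gw and acc_gb that rescale each step, which is what lets Adagrad use a relatively large base learning rate safely.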

