Machine Learning Interview Preparation Questions

1. Counting the parameters of a convolutional layer
https://www.cnblogs.com/hejunlin1992/p/7624807.html
http://blog.csdn.net/dcxhun3/article/details/46878999
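
A quick way to check answers for this question: with bias, a standard 2-D convolution has (k_h * k_w * C_in + 1) * C_out learnable parameters. A minimal sketch (the 11x11, 3-channel, 96-filter numbers below are AlexNet's conv1):

```python
def conv_params(k_h, k_w, c_in, c_out, bias=True):
    """Parameter count of a standard 2-D convolutional layer."""
    per_filter = k_h * k_w * c_in + (1 if bias else 0)
    return per_filter * c_out

# AlexNet conv1: 11x11 kernels over 3 input channels, 96 output channels
print(conv_params(11, 11, 3, 96))  # 34944
```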

2. OneClassSVM: unsupervised anomaly/outlier detection (one-class classification)
http://blog.csdn.net/sinat_26917383/article/details/76647272
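
A minimal usage sketch with scikit-learn's OneClassSVM; the gamma and nu values are illustrative rather than tuned, and the data is synthetic:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_train = 0.3 * rng.randn(100, 2)            # "normal" points near the origin
X_test = np.array([[0.0, 0.1], [4.0, 4.0]])  # one inlier, one obvious outlier

# nu bounds the fraction of training points treated as outliers
clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(X_train)
pred = clf.predict(X_test)                   # +1 = inlier, -1 = outlier
print(pred)
```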

3. Draw a convolutional network and implement it in code; understand what each layer's parameters mean in Caffe
http://www.cnblogs.com/denny402/tag/caffe/default.html?page=2
http://blog.csdn.net/liyuan123zhouhui/article/details/70858472 (I don't understand the explanation of `group` given here.)
Pick a CNN (e.g., AlexNet) and draw the network structure and the intermediate computations (convolutions, etc.) from the input layer to the output layer.

4. Meaning of the ROC curve (the x-axis is the false positive rate, the y-axis is the true positive rate; the closer the curve is to the top-left corner, the better)
http://blog.csdn.net/pipisorry/article/details/51788927
http://blog.csdn.net/abcjennifer/article/details/7359370
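
To make the axes concrete, a sketch that computes one ROC point (FPR on x, TPR on y) from a confusion matrix; the toy labels are made up:

```python
def roc_point(y_true, y_pred):
    """One ROC point: x = FPR = FP/(FP+TN), y = TPR = TP/(TP+FN)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / (fp + tn), tp / (tp + fn)

fpr, tpr = roc_point([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
print(fpr, tpr)  # a perfect classifier sits at (0.0, 1.0), the top-left corner
```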

5. AUC (the area under the ROC curve), precision, recall, and the F1 score
http://blog.csdn.net/pzy20062141/article/details/48711355
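
These metrics are easy to derive from first principles; a plain-Python sketch with made-up labels (the rank-based AUC below uses the probabilistic interpretation: the chance that a random positive outscores a random negative):

```python
def prf1(y_true, y_pred):
    """Precision, recall, and F1 from hard 0/1 predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

def auc(y_true, scores):
    """AUC as P(score of a random positive > score of a random negative); ties count 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(prf1([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]))  # each is 2/3 here
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))       # 1.0: perfect ranking
```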

6. Why do overfitting and underfitting occur, and how can they be addressed?
https://www.zhihu.com/question/59201590/answer/167392763
https://zhuanlan.zhihu.com/p/29707029

7. How to deal with imbalanced data in machine learning
http://blog.csdn.net/lujiandong1/article/details/52658675
https://www.nowcoder.com/questionTerminal/f0edfb5a59a84f10bf57af0548e3ec02?toCommentId=78036
(If a ratio of about 10:1 can be considered balanced, split the majority class into 1000 parts, train a classifier on each part combined with the minority-class samples, and then combine the 1000 classifiers into a single one with an ensemble method.)
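
The splitting step described above (EasyEnsemble-style undersampling) can be sketched as follows; the chunk count and sample indices are illustrative:

```python
import random

def balanced_chunks(majority, minority, n_chunks, seed=0):
    """Split the majority class into chunks and pair each chunk with ALL
    minority samples; each chunk then trains one member of the ensemble."""
    majority = list(majority)
    random.Random(seed).shuffle(majority)
    size = len(majority) // n_chunks
    return [majority[i * size:(i + 1) * size] + list(minority)
            for i in range(n_chunks)]

maj = list(range(1000))         # 1000 majority-class sample indices (toy)
mino = list(range(1000, 1010))  # 10 minority-class sample indices (toy)
chunks = balanced_chunks(maj, mino, 10)
print(len(chunks), len(chunks[0]))  # 10 chunks, each 100 majority + 10 minority
```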

8. My model: the last layer uses softmax
http://www.jianshu.com/p/dcf5a0f63597
https://www.zhihu.com/question/23765351/answer/139826397
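
The standard numerically stable softmax, as a reference sketch (the logits are arbitrary):

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # sums to 1; the largest logit gets the largest probability
```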

9. Understanding logistic regression, and the difference between logistic regression and SVM
http://www.cnblogs.com/ModifyRong/p/7739955.html
http://www.jianshu.com/p/19ca7eb549a7
https://www.zhihu.com/question/24904422/answer/92164679

10. What are the differences between Bagging and Boosting? Why does Bagging reduce variance while Boosting reduces bias?
https://www.cnblogs.com/earendil/p/8872001.html
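
The variance-reduction half of this question can be shown numerically: averaging n roughly independent high-variance predictors divides the variance by about n. A toy sketch with a made-up noisy base learner:

```python
import random

random.seed(0)

def noisy_pred(truth):
    """A toy high-variance base learner: the truth plus unit Gaussian noise."""
    return truth + random.gauss(0, 1)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

truth = 5.0
single = [noisy_pred(truth) for _ in range(2000)]
bagged = [sum(noisy_pred(truth) for _ in range(10)) / 10 for _ in range(2000)]
print(var(single), var(bagged))  # averaging 10 learners cuts variance ~10x
```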

11. RF, GBDT, and XGBoost, organized at interview level: principles and differences (Bagging vs. Boosting)
https://www.cnblogs.com/hrlnw/p/3850459.html(RF)
http://www.cnblogs.com/maybe2030/p/4585705.html(RF)
https://www.cnblogs.com/ModifyRong/p/7744987.html (GBDT principles)
https://blog.csdn.net/github_38414650/article/details/76061893 (XGBoost principles)
http://blog.csdn.net/qq_28031525/article/details/70207918
http://blog.csdn.net/abcjennifer/article/details/8164315
http://blog.csdn.net/qccc_dm/article/details/63684453 (comparison of RF, GBDT, and XGBoost)
https://www.zhihu.com/question/54626685?from=profile_question_card

12. An introduction to LightGBM: principles and a brief account of its improvements
https://blog.csdn.net/petoilej/article/details/79164316
https://blog.csdn.net/niaolianjiulin/article/details/76584785

13. Activation functions in neural networks
https://zhuanlan.zhihu.com/p/32610035
https://www.jianshu.com/p/22d9720dbf1a
https://www.v2ex.com/t/340003
https://zhuanlan.zhihu.com/p/22142013
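
The usual candidates from those posts, sketched in plain Python for reference:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Like ReLU, but keeps a small gradient for negative inputs."""
    return x if x > 0 else alpha * x

# sigmoid saturates for large |x| (vanishing gradients); ReLU does not for x > 0
print(sigmoid(0.0), relu(-2.0), leaky_relu(-2.0))
```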

14. Why does the LR model use the sigmoid function, and what is the mathematical principle behind it?
https://www.zhihu.com/question/35322351
https://blog.csdn.net/zjuPeco/article/details/77165974 (logistic regression and its derivation; solved with gradient descent)
https://blog.csdn.net/xbmatrix/article/details/69367428 (why LR discretizes continuous features into 0/1)
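
A minimal logistic-regression sketch trained by stochastic gradient descent on the log loss; the 1-D data, learning rate, and epoch count are all illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_lr(X, y, lr=0.1, epochs=500):
    """SGD on the log loss; the gradient wrt the logit is simply (p - y)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w, b = fit_lr(X, y)
preds = [sigmoid(w[0] * xi[0] + b) > 0.5 for xi in X]
print(preds)
```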

15. Cost functions / loss functions
http://blog.csdn.net/u010976453/article/details/78488279
https://blog.csdn.net/qq547276542/article/details/7798004
https://blog.csdn.net/google19890102/article/details/50522945
https://blog.csdn.net/u013527419/article/details/60322106 (cross-entropy cost function)
https://blog.csdn.net/u012162613/article/details/44239919

16. The difference between L1 and L2 regularization
https://blog.csdn.net/gshgsh1228/article/details/52199870 (understanding regularization and the regularization term)
https://blog.csdn.net/cs24k1993/article/details/79683042
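
The qualitative difference is visible in the closed-form proximal (shrinkage) steps for the two penalties: L1 soft-thresholds small weights to exactly zero (hence sparsity), while L2 only scales weights down and never zeroes them. A sketch:

```python
def l1_prox(w, lam):
    """Soft-thresholding, the proximal step for the L1 penalty:
    weights within [-lam, lam] snap to exactly 0."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def l2_prox(w, lam):
    """Proximal step for the L2 penalty: uniform shrinkage, never exactly 0."""
    return w / (1.0 + lam)

print(l1_prox(0.05, 0.1), l2_prox(0.05, 0.1))  # exactly 0.0 vs merely smaller
```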

17. What does column subsampling mean in RF and XGBoost?
https://www.cnblogs.com/SpeakSoftlyLove/p/5256131.html

18. Common SVM kernel functions
https://blog.csdn.net/batuwuhanpei/article/details/52354822
https://www.zhihu.com/question/21883548
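
The three kernels that come up most often, written out directly (degree, offset, and gamma values below are arbitrary defaults):

```python
import math

def linear_k(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_k(x, y, d=2, c=1.0):
    """Polynomial kernel (x.y + c)^d."""
    return (linear_k(x, y) + c) ** d

def rbf_k(x, y, gamma=0.5):
    """Gaussian (RBF) kernel exp(-gamma * ||x - y||^2); equals 1 when x == y."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

x, y = [1.0, 2.0], [3.0, 4.0]
print(linear_k(x, y), poly_k(x, y), rbf_k(x, x))  # 11.0 144.0 1.0
```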

19. A roundup of SVM questions
https://blog.csdn.net/gao1440156051/article/details/61435358

20. The KKT conditions
https://blog.csdn.net/johnnyconstantine/article/details/46335763
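
For quick reference, the conditions themselves: for min f(x) subject to g_i(x) <= 0 and h_j(x) = 0, a regular optimum x* with multipliers mu_i and lambda_j satisfies:

```latex
\begin{aligned}
&\text{stationarity:} && \nabla f(x^*) + \textstyle\sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0 \\
&\text{primal feasibility:} && g_i(x^*) \le 0, \quad h_j(x^*) = 0 \\
&\text{dual feasibility:} && \mu_i \ge 0 \\
&\text{complementary slackness:} && \mu_i \, g_i(x^*) = 0
\end{aligned}
```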

21. An introduction to PCA and LDA
https://blog.csdn.net/shizhixin/article/details/51181379
https://blog.csdn.net/yaoqi_isee/article/details/71036320
https://blog.csdn.net/itplus/article/details/11452743
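
A minimal PCA sketch via eigendecomposition of the covariance matrix (NumPy; the toy data is rank-1 on purpose, so a single direction explains all the variance):

```python
import numpy as np

def pca(X, n_components):
    """Center the data, eigendecompose the covariance matrix,
    and project onto the top eigenvectors."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]

# all variance lies along the direction (1, 1)
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
Z = pca(X, 2)
print(Z[:, 1])  # second component is ~0: one direction explains everything
```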

22. Association rules: the Apriori algorithm
https://blog.csdn.net/qq_16093959/article/details/78524068
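
A compact level-wise sketch of Apriori; the transaction data is made up, and candidate generation here simply unions frequent sets one size larger (correct, though less aggressively pruned than the textbook version):

```python
def apriori(transactions, min_support):
    """Level-wise search: a k-itemset can only be frequent if its
    (k-1)-subsets are frequent, so candidates are built from the last level."""
    transactions = [frozenset(t) for t in transactions]
    level = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        level = {c for c, n in counts.items() if n >= min_support}
        frequent.update({c: counts[c] for c in level})
        # next-level candidates: unions of frequent sets, one item larger
        level = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
    return frequent

tx = [{"milk", "bread"}, {"milk", "bread", "butter"}, {"bread"}, {"milk", "butter"}]
freq = apriori(tx, 2)
print(freq[frozenset({"milk", "bread"})])  # 2
```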

23. K-Means and its variants, K-medoids, DBSCAN, and k-NN
https://www.cnblogs.com/pangxiaodong/archive/2011/08/23/2150183.html (overview and summary)
https://blog.csdn.net/u011204487/article/details/59624571
https://blog.csdn.net/taoyanqi8932/article/details/53727841
https://blog.csdn.net/expleeve/article/details/46730705
https://blog.csdn.net/zhouxianen1987/article/details/68945844
https://www.cnblogs.com/hdu-2010/p/4621258.html
https://www.cnblogs.com/ybjourney/p/4702562.html (introduction to k-NN)
https://www.cnblogs.com/lighten/p/7593656.html (pros and cons)
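
The core K-Means loop (assign each point to its nearest centroid, then recompute centroids as cluster means), in a minimal 1-D sketch with hand-picked initial centroids:

```python
def kmeans_1d(points, cents, iters=10):
    """Minimal 1-D K-Means given initial centroids."""
    for _ in range(iters):
        clusters = [[] for _ in cents]
        for p in points:
            j = min(range(len(cents)), key=lambda i: abs(p - cents[i]))
            clusters[j].append(p)
        # empty clusters keep their old centroid
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return cents

print(kmeans_1d([1, 2, 3, 10, 11, 12], [1.0, 12.0]))  # [2.0, 11.0]
```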

24. Linear and nonlinear classifiers
https://blog.csdn.net/u014755493/article/details/70182532

25. Model ensembling (fusion) methods
https://blog.csdn.net/sinat_29819401/article/details/71191219

26. Computing the receptive field
https://zhuanlan.zhihu.com/p/26663577
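
The standard recurrence: walking from the input, the receptive field grows by (k - 1) * j per layer, where j is the cumulative stride (the "jump"). A sketch:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs from input to output.
    Returns the receptive field of one output unit on the input."""
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# two stacked 3x3 stride-1 convs see a 5x5 input patch
print(receptive_field([(3, 1), (3, 1)]))  # 5
print(receptive_field([(3, 2), (3, 1)]))  # 7
```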

27. Optimization methods (Newton's method, quasi-Newton methods, gradient descent, and conjugate gradient)
https://www.cnblogs.com/shixiangwan/p/7532830.html
https://blog.csdn.net/qq_36330643/article/details/78003952 (Newton's method)
https://blog.csdn.net/qq547276542/article/details/78186050 (conjugate gradient)
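
A side-by-side sketch of the first two methods on a 1-D quadratic (the test function and step sizes are illustrative): gradient descent takes many small first-order steps, while Newton's method uses curvature and solves a quadratic exactly in one step.

```python
def grad_descent(df, x0, lr=0.1, steps=100):
    """First-order: step against the gradient with a fixed learning rate."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

def newton(df, d2f, x0, steps=10):
    """Second-order: step by f'(x)/f''(x); exact in one step on a quadratic."""
    x = x0
    for _ in range(steps):
        x -= df(x) / d2f(x)
    return x

# minimize f(x) = (x - 3)^2
df = lambda x: 2.0 * (x - 3.0)
d2f = lambda x: 2.0
print(grad_descent(df, 0.0), newton(df, d2f, 0.0))  # both converge to 3
```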
