XGBoost 3 - XGBoost Principles and Usage  XGBoost principles: Boosting, AdaBoost, Gradient Boosting, XGBoost. 1 - Boosting. Boosting combines weak learners into a strong classifier. Building a strong learner with very high performance is difficult, but building a weak learner with only average performance is not. Weak learner: performs better than random guessing (a shallow CART tree is a good choice). $G(x) = \sum_{t=1}^{T} \alpha_t \phi_t(x)$...
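A minimal sketch of the weighted combination $G(x)=\sum_{t=1}^{T}\alpha_t\phi_t(x)$ from the entry above, using shallow scikit-learn trees as the weak learners $\phi_t$; the uniform weights, residual fitting, and toy data are illustrative assumptions, not the exact AdaBoost/XGBoost weighting scheme discussed in the post.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression data (illustrative only).
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.randn(200)

T = 5
weak_learners = []             # the phi_t
alphas = np.full(T, 1.0 / T)   # assumed uniform weights alpha_t

residual = y.copy()
for t in range(T):
    # A shallow CART tree is a reasonable weak learner.
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    weak_learners.append(tree)
    residual = residual - alphas[t] * tree.predict(X)

def G(x):
    """Strong learner: weighted sum of the weak learners' predictions."""
    return sum(a * phi.predict(x) for a, phi in zip(alphas, weak_learners))

print(np.mean((G(X) - y) ** 2))  # training MSE of the combined model
```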
XGBoost 2 - Machine Learning Basics  2 - Machine learning basics: supervised learning, classification and regression trees, random forests. 2.1 - Supervised learning: model, parameters, objective function (loss term and regularization term), optimization. 2.1.1 - Model. If y is discrete, it is a classification problem; if y is continuous, it is a regression problem. Given x, how do we predict the label $\hat{y}$? - For linear regression in the regression setting, the model is $\hat{y} = f(x) = \sum_j w_j x_j$...
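A tiny numpy sketch of the linear-regression model $\hat{y} = \sum_j w_j x_j$ mentioned above; the feature values and weights here are made up for illustration.

```python
import numpy as np

# Feature matrix: one row per example, one column per feature j (illustrative values).
X = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.5, 3.0]])
w = np.array([0.2, -0.1, 0.4])   # model parameters w_j

# y_hat_i = sum_j w_j * x_ij for every example i.
y_hat = X @ w
print(y_hat)
```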
XGBoost 1 - Basics and Simple Usage  XGBoost, extreme gradient boosting, is an optimized implementation of the gradient boosting machine that is fast and effective. Outline: xgboost introduction; xgboost features; xgboost basic usage guide; xgboost theoretical foundations (supervised learning, CART, boosting, gradient boosting, xgboost); xgboost in practice (feature engineering, ...)
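A hedged sketch of the kind of basic xgboost call the entry above outlines, using the native `DMatrix`/`train` interface; the synthetic data and parameter values are assumptions for illustration, not taken from the post.

```python
import numpy as np
import xgboost as xgb

# Synthetic binary-classification data (illustrative only).
rng = np.random.RandomState(0)
X = rng.randn(500, 10)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
dtest = xgb.DMatrix(X[400:], label=y[400:])

params = {
    "objective": "binary:logistic",  # binary classification
    "max_depth": 3,                  # shallow trees as weak learners
    "eta": 0.1,                      # learning rate
    "eval_metric": "logloss",
}
bst = xgb.train(params, dtrain, num_boost_round=50,
                evals=[(dtest, "test")], verbose_eval=False)

pred = bst.predict(dtest)            # predicted probabilities
print((pred > 0.5).astype(int)[:10])
```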
machine learning blog index  This series contains study notes for parts of Prof. Hsuan-Tien Lin's (NTU) courses "Machine Learning Foundations" and "Machine Learning Techniques". Machine learning basics: Machine Learning Notes - Nonlinear Transformation; Machine Learning Notes - Hazard of Overfitting; Machine Learning Notes - Regularization; Machine Learning Notes - Validation; Machine Learning Notes - Linear Regression; Machine Learning Notes - Logistic Regression; Machine Learning Notes - Classification with Linear Models; SV...
deep learning blog index  Course1 Week2-foundation of neural network Week3-one hidden layer neural network Week4-deep neural network Course2 Week1-setting up your ML application ...
Course4-week4-face recognition and neural style transfer  1 - what is face recognition? This week will show you a couple of important special applications of ConvNets; we will start with face recognition and then go on to neural style transfer. Verification:...
Course4-week3-object detection  1 - object localization  In order to build up to object detection, we first learn about object localization. Image classification: the algorithm looks at the picture and is responsible for saying ...
Course4-week2-case studies  case studies  1 - why look at case studies? How to put together the basic building blocks, such as CONV layers, POOL layers, and FC layers, to form an effective convolutional neural network? Outline: classic ...
Course4-week1-convolutional neural network  1 - Computer vision  Computer vision problems: image recognition, object detection, style transfer. One of the challenges of computer vision is that the input can get really big. Th...
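To make "the input can get really big" concrete with assumed numbers: a 1000 x 1000 RGB image already yields three million input features.

```python
# Assumed image size for illustration: 1000 x 1000 pixels, 3 color channels.
height, width, channels = 1000, 1000, 3
n_input_features = height * width * channels
print(n_input_features)   # 3,000,000 input features per image
```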
Course3 - machine learning strategy 2  1 - carrying out error analysis  If the learning algorithm is not yet at the performance of a human, then manually examining the mistakes that the algorithm is making can give us insight into what to do...
Course3 - machine learning strategy 1  introduction to ML strategy  1 - why ML strategy? How to structure a machine learning project: that is what machine learning strategy is about. What is machine learning strategy? Let's say we are working ...
Course2-week3-hyperparameterTuning - BatchNormalization - Framework  hyperparameter tuning  1 - tuning process  How to systematically organize the hyperparameter tuning process? Hyperparameters: learning rate $\alpha$; $\beta$ in momentum, or use the default 0.9; mini-b...
Course2-week2-optimization algorithm  optimization algorithms  1 - mini-batch gradient descent  Vectorization allows you to compute efficiently on m examples, but if m is large it can still be very slow. With the implementation of gradient des...
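A minimal numpy sketch of mini-batch gradient descent on a least-squares problem, showing one parameter update per mini-batch instead of one per full pass over all m examples; the batch size, learning rate, and synthetic data are assumptions for illustration.

```python
import numpy as np

rng = np.random.RandomState(1)
m, n = 10000, 5
X = rng.randn(m, n)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.randn(m)

w = np.zeros(n)
lr, batch_size, epochs = 0.1, 64, 5      # assumed hyperparameters

for epoch in range(epochs):
    perm = rng.permutation(m)            # shuffle examples each epoch
    for start in range(0, m, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / len(idx)  # gradient on this mini-batch only
        w -= lr * grad                          # one parameter update per mini-batch

print(w)   # should end up close to true_w
```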
Course2-week1-setting up your ML application  setting up your ML application  1 - train/dev/test set  This week we'll learn the practical aspects of how to make your neural network work well, ranging from things like hyperparameter tuning to ho...
Course1-week4-deep neural network  4.1 - deep L-layer neural network  We have seen forward propagation and backward propagation in the context of a neural network with a single hidden layer, as well as for logistic regression, and we le...
Course1-week3-one hidden layer neural network  3.1 - neural networks overview  Some new notation has been introduced: we'll use a superscript square bracket to refer to the layer of the neural network; for instance, $w^{[1]}$ represents the pa...
Course1-week2-foundation of neural network  Week 2  Basics of Neural Network Programming  2.1 binary classification  $m$ training examples: $(x^{(1)}, y^{(1)}), \cdots, (x^{(m)}, y^{(m)})$, $X = $...
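The excerpt above cuts off at the definition of the data matrix $X$; as a small, hypothetical illustration of the notation, the sketch below stacks the $m$ examples as columns (an assumed convention, not quoted from the post).

```python
import numpy as np

# Hypothetical tiny dataset: m = 3 examples, each x^{(i)} with n_x = 4 features.
x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([0.5, 0.1, 0.2, 0.9])
x3 = np.array([2.0, 1.0, 0.0, 1.0])

# Assumed convention: stack examples as columns, so X has shape (n_x, m).
X = np.column_stack([x1, x2, x3])
y = np.array([[1, 0, 1]])    # labels as a (1, m) row vector

print(X.shape, y.shape)      # (4, 3) (1, 3)
```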
Keras study notes  Keras: a Python-based deep learning library. Keras is a high-level neural network API, written in pure Python and running on top of the TensorFlow, Theano, and CNTK backends. Keras supports Python 2.7-3.6. 1 - Some basic concepts  1.1 - Symbolic computation  Keras's underlying libraries are Theano or TensorFlow, which are also called Keras backends. Whether Thea...
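A minimal Keras sketch in the spirit of the notes above, assuming a TensorFlow or Theano backend as mentioned in the entry; the layer sizes, optimizer, and random stand-in data are illustrative assumptions.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Random data standing in for a real dataset (illustrative only).
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))

# A small fully connected network built with the Sequential API.
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x_train, y_train, verbose=0))
```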
A brief note on lambda functions  lambda functions: a lambda is an anonymous function with the syntax `lambda parameters: expression`. Typical usage: import numpy as np; sigmoid = lambda x: 1./(1.+np.exp(-x)); sigmoid(np.array([-10, 0, 10])) gives array([ 4.53978687e-05, 5.00000000e-01, 9.9
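For completeness, a runnable version of the snippet in the entry above; the printed values simply come from evaluating the sigmoid at -10, 0, and 10.

```python
import numpy as np

# Anonymous function: lambda parameters: expression
sigmoid = lambda x: 1. / (1. + np.exp(-x))

print(sigmoid(np.array([-10, 0, 10])))
# [4.53978687e-05 5.00000000e-01 9.99954602e-01]
```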