Machine Learning
LiuSpark
20161113 #CVPR# Study Notes
20161113 CVPR study notes. Original · 2016-11-24 18:29:43 · 409 views · 0 comments
#cs231n# Assignment 2: Dropout.ipynb
Implemented Dropout.ipynb based on my own understanding and the reference materials. Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement… Original · 2017-03-19 10:25:17 · 1515 views · 0 comments
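As a sketch of the idea described in the excerpt, here is a minimal inverted-dropout forward pass (the function name and the convention that `p` is the *drop* probability are my own; the assignment's exact API may differ):

```python
import numpy as np

def dropout_forward(x, p=0.5, mode='train'):
    """Inverted dropout: zero activations with probability p, rescale survivors by 1/(1-p)."""
    if mode == 'train':
        mask = (np.random.rand(*x.shape) >= p) / (1 - p)  # scale at train time
        return x * mask
    return x  # at test time the layer is the identity

x = np.ones((4, 5))
out = dropout_forward(x, p=0.5, mode='train')
print(out.shape)  # (4, 5); surviving entries are rescaled to 2.0
```

Rescaling at training time keeps the expected activation unchanged, so no extra work is needed in the test-time forward pass.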
#cs231n# Related Resources
CS231n Notes · CS231n syllabus · NetEase Open Course "CS231n: Deep Learning and Computer Vision" · xieyi4650's blog, CS231n series. Original · 2017-02-25 10:31:02 · 379 views · 0 comments
20170226 #cs231n# 4. Backpropagation
Backpropagation is a method for recursively computing the gradient of an expression by applying the chain rule. The gradients with respect to the training inputs $x_i$ can still be useful, for example when visualizing what the neural network is doing in order to interpret it. Reference: CS231n course notes (Chinese translation): backpropagation notes, Zhihu 智能单元 column by 杜客. Chain rule: $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial q}\frac{\partial q}{\partial x}$… Original · 2017-02-26 23:12:10 · 732 views · 0 comments
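The chain rule above can be traced on the classic CS231n toy circuit $f(x, y, z) = (x + y)\,z$ with intermediate $q = x + y$:

```python
# Forward pass: f(x, y, z) = (x + y) * z, with q = x + y
x, y, z = -2.0, 5.0, -4.0
q = x + y            # q = 3
f = q * z            # f = -12

# Backward pass: apply the chain rule df/dx = (df/dq)(dq/dx), etc.
dfdq = z             # d(q*z)/dq = z = -4
dfdz = q             # d(q*z)/dz = q = 3
dfdx = dfdq * 1.0    # dq/dx = 1  ->  df/dx = -4
dfdy = dfdq * 1.0    # dq/dy = 1  ->  df/dy = -4
print(dfdx, dfdy, dfdz)  # -4.0 -4.0 3.0
```

Each gate only needs its local gradient and the gradient flowing in from above, which is what makes the recursion work.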
20170316 #cs231n# 8. Convolutional Neural Networks
Convolutional Neural Networks (CNNs / ConvNets)… Original · 2017-03-16 19:41:41 · 2283 views · 0 comments
#cs231n# Assignment 2: ConvolutionalNetworks.ipynb
Implemented ConvolutionalNetworks.ipynb based on my own understanding and the reference materials. So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network archit… Original · 2017-03-20 09:32:34 · 2056 views · 0 comments
20170328 #cs231n# 12. CNNs in Practice
Getting more data: on small datasets, data augmentation and transfer learning are a good fit. Data augmentation is easy to implement: apply transformations to an image before feeding it into the CNN, changing the pixels while leaving the label unchanged; this can reduce overfitting. Examples: horizontal flips; random crops/scales (at test time, also evaluate on crops/scales/flips of the test set)… Original · 2017-03-28 21:29:55 · 661 views · 0 comments
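A minimal sketch of the two augmentations named above, assuming numpy images of shape (H, W, 3); the 32→24 sizes are hypothetical:

```python
import numpy as np

def augment(img, crop=24):
    """Random horizontal flip + random crop; the label stays unchanged."""
    if np.random.rand() < 0.5:
        img = img[:, ::-1, :]                        # horizontal flip
    h, w, _ = img.shape
    top = np.random.randint(0, h - crop + 1)         # random crop offsets
    left = np.random.randint(0, w - crop + 1)
    return img[top:top + crop, left:left + crop, :]

img = np.zeros((32, 32, 3))
print(augment(img).shape)  # (24, 24, 3)
```

Because both transforms preserve the label, they effectively enlarge the training set for free.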
20170325 #cs231n# 10. Understanding and Visualizing Convolutional Neural Networks
Visualize patches that maximally activate neurons: feed data into a given layer and see which parts of the input most strongly activate its neurons. Visualize the filters/kernels (raw weights), although visualizing the weights of higher layers is not particularly meaningful. Visualizing the representation: t-SNE visualizati… Original · 2017-03-25 15:32:09 · 1793 views · 0 comments
20170324 #cs231n# 9. ConvNets for Spatial Localization & Object Detection
Computer Vision Tasks… Original · 2017-03-24 11:08:25 · 909 views · 0 comments
20170402 #cs231n# 13. Others
Segmentation: in the figure above the four cows are not separated individually because they overlap; that is semantic segmentation. Instance segmentation (also known as simultaneous detection and segmentation) assigns every pixel to a specific object instance. In practice, semantic segmentation and instance segmentation are carried out separately. Semantic segmentation: multi-scale, refinement, upsampling… Original · 2017-04-02 09:34:13 · 972 views · 0 comments
20170326 #cs231n# 11. Recurrent Neural Networks (RNN)
Main references: Jianshu: [译] Understanding LSTM Networks · CSDN: An Introduction to Recurrent Neural Networks (RNN) · cs231n RNN slides · karpathy/min-char-rnn.py https://gist.github.com/karpathy/d4dee566867f8291f086 · Image captioning (CNN→RNN): remove the CNN's final classifier, then… Original · 2017-03-26 14:48:44 · 2093 views · 0 comments
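In the spirit of karpathy's min-char-rnn linked above, a single vanilla RNN step might look like this (hypothetical sizes; bias terms omitted for brevity):

```python
import numpy as np

H, D = 8, 4                         # hidden size, input vocabulary size
rng = np.random.default_rng(0)
Wxh = rng.normal(size=(H, D)) * 0.01   # input-to-hidden weights
Whh = rng.normal(size=(H, H)) * 0.01   # hidden-to-hidden weights
Why = rng.normal(size=(D, H)) * 0.01   # hidden-to-output weights

def rnn_step(x, h_prev):
    h = np.tanh(Wxh @ x + Whh @ h_prev)  # new hidden state
    y = Why @ h                          # unnormalized output scores
    return h, y

h = np.zeros(H)
x = np.eye(D)[0]                    # one-hot input character
h, y = rnn_step(x, h)
print(h.shape, y.shape)             # (8,) (4,)
```

The same `Whh` is reused at every time step, which is what lets the hidden state carry information forward through the sequence.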
Deconv
https://github.com/vdumoulin/conv_arithmetic · https://www.zhihu.com/question/43609045/answer/98699795 · https://www.zhihu.com/question/43609045/answer/132235276 · https://www.zhihu.com/question/43609045/answer… Repost · 2017-04-23 19:34:18 · 549 views · 0 comments
Reproducing Kernel Hilbert Space (RKHS) & SVM
Reproducing Kernel Hilbert Space (RKHS): http://www.cnblogs.com/murongxixi/p/3480851.html · http://blog.pluskid.org/?page_id=683 Repost · 2017-05-14 13:15:43 · 1249 views · 0 comments
Inception Score & Mode Score
Inception Score; Mode Score. References: [1] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li. Mode Regularized Generative Adversarial Networks. ICLR 2017. [2] Improved Techniques for Trainin… Repost · 2017-05-19 21:00:09 · 6947 views · 0 comments
#cs231n# Assignment 2: BatchNormalization.ipynb
Implemented BatchNormalization.ipynb based on my own understanding and the reference materials. One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam… Original · 2017-03-19 10:24:06 · 2081 views · 0 comments
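A minimal sketch of the training-time batch-normalization forward pass the notebook implements (the function name and signature here are illustrative, not the assignment's exact API, and the running statistics needed at test time are omitted):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm: normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per column
    return gamma * x_hat + beta

x = np.random.randn(64, 10) * 3 + 7        # batch with shifted, scaled features
out = batchnorm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))
```

With `gamma = 1` and `beta = 0` the output of each feature has (approximately) zero mean and unit variance regardless of the input statistics, which is what stabilizes training.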
#cs231n# Assignment 2: FullyConnectedNets.ipynb
Implemented FullyConnectedNets.ipynb based on my own understanding and the reference materials. In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but… Original · 2017-03-19 10:20:22 · 4264 views · 0 comments
20170307 #cs231n# 7. Neural Networks Part 3: Learning and Evaluation
Gradient checks / momentum / AdaGrad / RMSProp / Adam / … Original · 2017-03-07 14:22:46 · 978 views · 0 comments
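Two of the update rules named above can be sketched as follows (illustrative hyperparameters; the formulas follow the standard CS231n presentation):

```python
import numpy as np

lr = 1e-1  # learning rate (illustrative)

def sgd_momentum(x, dx, v, mu=0.9):
    v = mu * v - lr * dx                # accumulate a velocity vector
    return x + v, v

def rmsprop(x, dx, cache, decay=0.99, eps=1e-8):
    cache = decay * cache + (1 - decay) * dx**2   # moving average of squared grads
    return x - lr * dx / (np.sqrt(cache) + eps), cache

# Minimize f(x) = x^2 with momentum; gradient is 2x
x, v = 5.0, 0.0
for _ in range(100):
    x, v = sgd_momentum(x, 2 * x, v)
print(round(x, 4))
```

Momentum damps oscillations along steep directions, while RMSProp divides out a per-parameter estimate of the gradient scale; Adam roughly combines the two ideas.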
20161206 #cs231n# 2. Linear Classifiers / Assignment 1: SVM & Softmax
Linear classification: Support Vector Machine, Softmax. Linear classifiers… Original · 2016-12-08 12:50:28 · 619 views · 0 comments
20170202 Coursera Stanford-MachineLearning / Week 9
Week 9: Anomaly Detection / Recommender Systems. Anomaly detection: the training examples have the highest probability near the center of the distribution, so a test example near the center is considered normal. Gaussian (normal) distribution: $x \sim N(\mu,\sigma^{2})$, $P(x;\mu,\sigma^{2})=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}$… Original · 2017-02-02 20:52:28 · 1122 views · 0 comments
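A minimal sketch of the anomaly-detection recipe above: fit a per-feature Gaussian to the training data and flag low-density points (the threshold `eps` is a hypothetical choice):

```python
import numpy as np

def fit_gaussian(X):
    """Estimate per-feature mean and variance from training data."""
    return X.mean(axis=0), X.var(axis=0)

def p(x, mu, var):
    """Product of independent per-feature Gaussian densities."""
    return np.prod(np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var))

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 2))   # unlabeled "normal" training data
mu, var = fit_gaussian(X)
eps = 1e-4                            # density threshold (hypothetical)
print(p(np.array([0.0, 0.0]), mu, var) > eps)   # near the center: normal
print(p(np.array([6.0, 6.0]), mu, var) > eps)   # far from the center: anomaly
```

A point is flagged as anomalous exactly when its modeled density falls below the threshold, matching the intuition that mass concentrates near the center.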
20170129 Coursera Stanford-MachineLearning / Week 8
Week 8: Unsupervised Learning. Supervised vs. unsupervised learning: in supervised learning the data come with labels y; in unsupervised learning the algorithm must discover structure in unlabeled data on its own. Clustering… Original · 2017-01-29 18:06:14 · 908 views · 0 comments
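As one concrete clustering algorithm from this week, here is a minimal k-means sketch (illustrative only: fixed iteration count, no empty-cluster handling):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Alternate assignment and update steps for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        labels = np.argmin(((X[:, None] - centroids)**2).sum(-1), axis=1)
        # Update step: each centroid moves to the mean of its cluster
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

data_rng = np.random.default_rng(1)
X = np.vstack([data_rng.normal(0, 1, (50, 2)),   # two well-separated blobs
               data_rng.normal(8, 1, (50, 2))])
labels, C = kmeans(X, k=2)
print(np.unique(labels))
```

On well-separated blobs like these, the two centroids settle near the two blob means.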
20170125 Coursera Stanford-MachineLearning / Week 4-5
Week 4/5: Neural Networks: Representation / Learning… Original · 2017-01-25 11:58:01 · 502 views · 0 comments
20170123 Coursera Stanford-MachineLearning / Week 7
Week 7: Support Vector Machine (SVM), also known as the Large Margin Classifier… Original · 2017-01-23 17:22:11 · 756 views · 0 comments
20161129 Coursera Stanford-MachineLearning / Week 6
Week 6: Advice for Applying Machine Learning & Machine Learning System Design… Original · 2016-11-29 21:45:09 · 488 views · 0 comments
20161124 Coursera Stanford-MachineLearning / Week 1-3
Study notes… Original · 2016-11-24 18:25:40 · 441 views · 0 comments
20161202 Coursera Stanford-MachineLearning / Week 10-11
Week 10: Large Scale Machine Learning; Stochastic Gradient Descent… Original · 2016-12-05 00:10:57 · 805 views · 0 comments
20161106 #cs231n# 1. Nearest Neighbor Classifier / Assignment 1: KNN
Nearest neighbor classifier… Original · 2016-11-24 18:30:39 · 734 views · 0 comments
20170228 #cs231n# 5. Neural Networks Part 1 / Assignment 1: NeuralNetwork
Neural Networks Part 1: Setting up the Architecture. The network computes $s = W_2 \max(0, W_1 x)$, with $x$ of size [3072×1], $W_1$ of size [100×3072], and $W_2$ of size [10×100]. The max function is a non-linearity, and it is exactly this change that distinguishes the model from a linear classifier. The parameters $W$… Original · 2017-02-28 10:28:13 · 661 views · 0 comments
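The formula above can be checked directly with the stated shapes (random weights purely for illustration):

```python
import numpy as np

# Forward pass for s = W2 * max(0, W1 * x) with the sizes from the notes
rng = np.random.default_rng(0)
x = rng.normal(size=(3072, 1))             # input image as a column vector
W1 = rng.normal(size=(100, 3072)) * 0.01   # first-layer weights
W2 = rng.normal(size=(10, 100)) * 0.01     # second-layer weights

h = np.maximum(0, W1 @ x)   # elementwise ReLU non-linearity
s = W2 @ h                  # class scores, one per category
print(h.shape, s.shape)     # (100, 1) (10, 1)
```

Without the `max`, the two matrices would collapse into a single matrix $W_2 W_1$ and the model would again be linear.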
20170304 #cs231n# 6. Neural Networks Part 2: Setting up the Data and the Loss
Data preprocessing. Mean subtraction (zero-centering): subtract the mean of every feature, X -= np.mean(X, axis = 0). Normalization: scale all dimensions of the data so that their value ranges are approximately equal; this only makes sense when different input features have different ranges (or units). In image processing the values already lie in [0, 255], so this step matters less. Normalization typically… Original · 2017-03-04 18:12:30 · 789 views · 0 comments
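The two steps above, sketched on hypothetical data whose features have very different scales:

```python
import numpy as np

# Hypothetical data: 5 features spanning wildly different ranges
X = np.random.rand(200, 5) * np.array([1, 10, 100, 1000, 10000])

X -= np.mean(X, axis=0)   # mean subtraction (zero-centering), as in the notes
X /= np.std(X, axis=0)    # normalization: unit standard deviation per feature

print(X.mean(axis=0).round(3), X.std(axis=0).round(3))
```

After these two lines every column has zero mean and unit standard deviation, so no single feature dominates the loss purely because of its units.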
20170225 #cs231n# 3. Optimization
Optimization: Stochastic Gradient Descent. Optimization is the process of finding a $W$ that minimizes the loss function. The SVM cost function is a convex function, which brings in convex optimization; but the cost functions of neural networks are non-convex, and many loss functions are non-… Original · 2017-02-25 16:56:04 · 630 views · 0 comments
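A minimal SGD loop, using a convex least-squares toy loss in place of the SVM loss (hypothetical data; the loop structure is the same for any differentiable loss):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # hypothetical training inputs
true_W = np.array([1.0, -2.0, 0.5])      # weights we hope to recover
y = X @ true_W                           # noiseless targets

W = np.zeros(3)
lr, batch = 0.1, 16
for step in range(500):
    idx = rng.choice(100, batch, replace=False)    # sample a random minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ W - yb) / batch        # gradient of the batch loss
    W -= lr * grad                                 # step against the gradient
print(W.round(2))
```

Because each step uses only a small minibatch, the gradient is noisy but cheap, which is what makes SGD scale to large datasets.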
GAN: Mode Collapse
Mode collapse… Original · 2017-05-21 13:54:31 · 15579 views · 4 comments