Deep Learning
Average post quality score: 79
zcx_language
This author is lazy and hasn't left anything behind…
Thoughts on Parameter Initialization in Deep Neural Networks
Contents: Can the parameters be initialized to zero? · Test A · Output · Test B · Output · Conclusion

Can the parameters be initialized to zero? Test A:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils import data

features = torch.randn((100, 3))
labels = torch.randn(100, 1)
dataset = data.TensorDataset(features, labels)
d…

Original · 2020-07-11 12:11:44 · 225 reads · 0 comments
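The excerpt cuts off before the test itself. A minimal sketch of the kind of zero-initialization experiment the post describes (the network shape and setup below are my own illustration, not the post's code):

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
features = torch.randn((100, 3))
labels = torch.randn(100, 1)

# A small MLP with every parameter set to zero.
net = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 1))
for p in net.parameters():
    nn.init.zeros_(p)

loss = F.mse_loss(net(features), labels)
loss.backward()
print(net[0].weight.grad)  # all zeros: no signal reaches the first layer

With all parameters at zero the hidden units are identical and their gradients vanish, so the symmetry between units is never broken; this is the usual argument for random initialization.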
Loss Function Evolution for Face Recognition
Contents: Softmax Loss · Softmax · Softmax Loss · Center Loss

The loss function largely steers the direction in which a model converges, so a good loss function is crucial for the problem at hand. Today's face recognition methods map each face to a low-dimensional feature vector and decide whether two faces belong to the same identity by comparing the distance between their feature vectors. Because there are many face classes but few samples per class, the main optimization direction in face recognition has become increasing the inter-class distance while reducing the intra-class distance during training. In recent years, face recognition algorithms…

Original · 2020-05-13 15:07:09 · 530 reads · 0 comments
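As a hedged sketch of the two loss terms named above, pairing softmax cross-entropy (inter-class separation) with a center loss (intra-class compactness); the CenterLoss module and all tensor shapes are illustrative assumptions, not the post's code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Pull each feature toward a learnable center of its class."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        # Squared distance between each feature and its class center.
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

# Total loss = softmax cross-entropy + lambda * center loss.
logits = torch.randn(8, 10)          # dummy classifier outputs
feats = torch.randn(8, 128)          # dummy embedding features
labels = torch.randint(0, 10, (8,))
center_loss = CenterLoss(10, 128)
loss = F.cross_entropy(logits, labels) + 0.01 * center_loss(feats, labels)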
Class4-Week4 Face Recognition & Neural Style Transfer
Contents: Face Recognition · Face Verification vs. Face Recognition · Siamese Network · Triplet Loss · Face Verification and Binary Classification · Neural Style Transfer · What is Neural Style Transfer · Why Intermediate Layer…

Original · 2019-11-24 16:28:09 · 660 reads · 0 comments
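For the Triplet Loss item above, a minimal sketch (the margin value and embedding size are assumptions, not from the post; PyTorch's built-in nn.TripletMarginLoss implements the same idea):

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Encourage d(anchor, positive) + margin <= d(anchor, negative).
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

anchor = F.normalize(torch.randn(4, 128), dim=1)    # dummy embeddings
positive = F.normalize(torch.randn(4, 128), dim=1)
negative = F.normalize(torch.randn(4, 128), dim=1)
print(triplet_loss(anchor, positive, negative))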
Class4-Week3 Object Detection
Contents: Object Localization · Landmark Detection · Object Detection · Sliding Window Detection · Convolutional Implementation of Sliding Windows · YOLO (You Only Look Once) · Intersection over Union (IoU) · Non-max Suppression…

Reposted · 2019-10-07 12:30:10 · 354 reads · 0 comments
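A small sketch of the Intersection over Union computation that non-max suppression relies on, assuming boxes given as (x1, y1, x2, y2) corners:

def iou(box_a, box_b):
    # Overlap rectangle of the two boxes.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7, roughly 0.143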
Class4-Week2 Case study
Contents: Classic Networks · LeNet-5 · AlexNet · VGG-16 · Residual Networks · Architecture · Why ResNets Work? · Network in Network and 1x1 Convolutions · Inception Network · Architecture · Inception Module · Transfer Learning · Data Augme…

Original · 2019-10-05 17:30:16 · 609 reads · 0 comments
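As a hedged sketch of the residual block behind "Why ResNets Work?" (channel counts are illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        # The skip connection lets the block fall back to the identity,
        # which is why very deep ResNets still train well.
        return F.relu(self.conv2(F.relu(self.conv1(x))) + x)

y = ResidualBlock(16)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])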
Class4-Week1 Convolutional Neural Networks
Why Padding? The main benefits of padding are the following: it allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper…

Original · 2019-09-29 11:12:48 · 133 reads · 0 comments
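A quick sketch of that shape-preservation point, comparing no padding with "same"-style padding (sizes are illustrative):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)
no_pad = nn.Conv2d(3, 8, kernel_size=3, padding=0)
same_pad = nn.Conv2d(3, 8, kernel_size=3, padding=1)
print(no_pad(x).shape)    # torch.Size([1, 8, 30, 30]) -- shrinks
print(same_pad(x).shape)  # torch.Size([1, 8, 32, 32]) -- preserved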
Class3-Week1 ML Strategy1
Orthogonalization: Orthogonalization means you should ensure that adjusting "one parameter" affects only the specific aspect of your model you want to optimize. Chain of assumptions in ML: fit trainin…

Original · 2019-09-19 21:19:58 · 297 reads · 0 comments
Class2-Week3 Hyperparameter Tuning
Hyperparameter Tuning. Recommended order: first, the learning rate α; second, the momentum term β, the number of hidden units, and the mini-batch size; third, the number of layers and the learning-rate decay. Try random values when choosing paramet…

Original · 2019-09-16 12:59:41 · 524 reads · 0 comments
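A sketch of the "random values" advice: sample hyperparameters at random on a log scale rather than on a grid (the ranges are illustrative):

import numpy as np

rng = np.random.default_rng(0)
# Learning rate: sample the exponent uniformly, so 1e-4 .. 1e-1
# are all equally likely on a log scale.
alpha = 10 ** rng.uniform(-4, -1)
# Momentum: sample 1 - beta on a log scale over 0.001 .. 0.1,
# giving beta in 0.9 .. 0.999.
beta = 1 - 10 ** rng.uniform(-3, -1)
print(alpha, beta)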
Class2-Week2 Optimization Algorithm
Contents: Mini-batch Gradient · Implementation · Understanding Mini-batch Gradient Descent · Exponentially Weighted Average · Gradient Descent with Momentum · RMSProp (Root Mean Square Prop) · Adam (Adaptive Moment Estimation) · Learn…

Original · 2019-09-10 19:52:43 · 344 reads · 0 comments
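A compact sketch of the exponentially weighted average that momentum builds on (RMSProp and Adam apply the same idea to squared gradients); the names and toy loss are mine:

import numpy as np

def momentum_step(w, grad, v, alpha=0.01, beta=0.9):
    # v is an exponentially weighted average of past gradients:
    # v = beta * v + (1 - beta) * grad, then step in the -v direction.
    v = beta * v + (1 - beta) * grad
    return w - alpha * v, v

w, v = np.zeros(3), np.zeros(3)
target = np.array([1.0, 2.0, 3.0])
for _ in range(500):
    grad = 2 * (w - target)  # gradient of the toy loss ||w - target||^2
    w, v = momentum_step(w, grad, v)
print(w)  # close to [1, 2, 3]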
Class2-Week1-Improving Deep Neural Networks
Contents: Setting Up Your Machine Learning Application · Train/Dev/Test Sets · Bias/Variance · Regularizing Neural Networks · L2 Regularization · Why Regularization Reduces Overfitting? · Dropout Regularization · Setting up your…

Original · 2019-09-01 17:11:16 · 350 reads · 0 comments
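A minimal sketch of inverted dropout as described in the course (the keep probability is illustrative):

import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(a, keep_prob=0.8):
    # Inverted dropout: zero units with probability 1 - keep_prob,
    # then scale by 1 / keep_prob so the expected activation is
    # unchanged and no rescaling is needed at test time.
    mask = rng.random(a.shape) < keep_prob
    return a * mask / keep_prob

print(dropout_forward(np.ones((3, 4))))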
Class1-Week4-Deep Neural Network
Compute Process. Forward propagation, layer l: input $A^{[l-1]}$; compute $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ and $A^{[l]} = g(Z^{[l]})$…

Original · 2019-08-11 19:35:33 · 265 reads · 0 comments
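Those equations translate directly into a loop over layers; a sketch in NumPy, with the layer widths and the tanh activation as assumptions:

import numpy as np

def forward(A, weights, biases, g=np.tanh):
    # Layer l: Z[l] = W[l] @ A[l-1] + b[l]; A[l] = g(Z[l]).
    for W, b in zip(weights, biases):
        A = g(W @ A + b)
    return A

rng = np.random.default_rng(0)
sizes = [3, 4, 4, 1]  # layer widths, input layer first
weights = [rng.standard_normal((n, m)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros((n, 1)) for n in sizes[1:]]
A0 = rng.standard_normal((3, 5))  # 5 examples stacked as columns
print(forward(A0, weights, biases).shape)  # (1, 5)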
Class1-Week3-Neural Networks Overview
Contents: Neural Network Representation · Computing a Neural Network's Output · Activation Functions · Why Do You Need Non-linear Activation Functions? · Plots of the Activation Functions · Derivatives of Activation Functions · Diffe…

Original · 2019-08-11 19:31:06 · 365 reads · 0 comments
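For the derivatives item, a small sketch of the common activations with their derivatives written via the activation value itself:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Derivatives expressed through the activation a = g(z):
def d_sigmoid(a):
    return a * (1 - a)            # sigmoid'(z) = a (1 - a)

def d_tanh(a):
    return 1 - a ** 2             # tanh'(z) = 1 - a^2

def d_relu(z):
    return (z > 0).astype(float)  # relu'(z) = 1 for z > 0, else 0

z = np.array([-1.0, 0.0, 1.0])
print(d_sigmoid(sigmoid(z)), d_tanh(np.tanh(z)), d_relu(z))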
Class1-Week2-Neural Networks Basics
Logistic Regression. Description: Logistic regression is a learning algorithm used in a supervised learning problem when the output labels are either 0 or 1…

Original · 2019-07-27 21:22:28 · 402 reads · 0 comments
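A minimal sketch of binary logistic regression trained by gradient descent in the course's vectorized notation (the toy data and learning rate are mine):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 200))              # n_x features, m examples
y = (X[0] + X[1] > 0).astype(float)[None, :]   # toy binary labels
w, b = np.zeros((2, 1)), 0.0

for _ in range(500):
    a = 1 / (1 + np.exp(-(w.T @ X + b)))       # y_hat = sigmoid(w^T x + b)
    dz = a - y                                 # cross-entropy gradient wrt z
    w -= 0.1 * (X @ dz.T) / X.shape[1]
    b -= 0.1 * dz.mean()
print(((a > 0.5) == y).mean())                 # training accuracy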
Coursera-Machine Learning-ex5
Some Points · The Results: linearRegCostFunction.m

function [J, grad] = linearRegCostFunction(X, y, theta, lambda)
% Initialize some useful values
m = length(y); % number of training examples
%…

Original · 2019-01-04 10:03:39 · 300 reads · 0 comments
Coursera-Machine Learning-ex4
Some Points. Implementation steps: 1. Pick a neural network architecture: number of input units = dimension of features; number of output units = number of classes; number of hidden layers = 1 (default), …

Original · 2018-12-19 00:01:22 · 190 reads · 0 comments
Coursera-Machine Learning-ex1
Some Points: When features differ by orders of magnitude, performing feature scaling first can make gradient descent converge much more quickly. To do so: subtract the mean value of each feature from the d…

Original · 2018-11-19 10:17:31 · 168 reads · 0 comments
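A sketch of that normalization step (the exercise itself is in Octave/MATLAB; this Python version is only an illustration):

import numpy as np

def feature_normalize(X):
    # Subtract each feature's mean, then divide by its standard deviation,
    # so all features are on a comparable scale for gradient descent.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

X = np.array([[2104.0, 3], [1600.0, 3], [2400.0, 4]])  # sq. feet, bedrooms
X_norm, mu, sigma = feature_normalize(X)
print(X_norm.mean(axis=0), X_norm.std(axis=0))  # ~0 and ~1 per column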
Coursera-Machine Learning-ex2
Some Points: We can use regularization to prevent overfitting; note that the sum in the regularization formula starts at theta index 1, i.e. the intercept term theta_0 is not regularized. The Results: plotData.m

pos = find(y == 1);
neg = find(y == …

Original · 2018-12-09 18:18:24 · 265 reads · 0 comments
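A sketch of that regularization detail in Python (the exercise code is Octave; note how the regularization term skips the bias entry):

import numpy as np

def regularized_cost(theta, X, y, lam):
    m = len(y)
    h = 1 / (1 + np.exp(-X @ theta))
    cost = -(y * np.log(h) + (1 - y) * np.log(1 - h)).mean()
    # Regularize theta[1:] only; the intercept term is excluded.
    return cost + lam / (2 * m) * (theta[1:] ** 2).sum()

rng = np.random.default_rng(0)
X = np.hstack([np.ones((4, 1)), rng.standard_normal((4, 2))])
theta = np.zeros(3)
y = np.array([0.0, 1.0, 1.0, 0.0])
print(regularized_cost(theta, X, y, lam=1.0))  # ln(2), about 0.693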
Coursera-Machine Learning-ex3
Some Points: In one-vs-all multi-class classification, the per-class probabilities obtained by training a separate logistic regression classifier for each class do not sum to 1. The Results: lrCostFunction.m…

Original · 2018-12-15 17:02:58 · 185 reads · 0 comments
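A sketch of why one-vs-all scores need not sum to 1 (the classifier weights here are random stand-ins for trained ones):

import numpy as np

rng = np.random.default_rng(0)
Theta = rng.standard_normal((10, 5))   # one logistic classifier per class
x = rng.standard_normal(5)
probs = 1 / (1 + np.exp(-Theta @ x))   # each classifier scored independently
print(probs.sum())                     # generally != 1, unlike softmax
print(probs.argmax())                  # predict the highest-scoring class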