Computer Vision
一叶知秋Autumn
[Paper Reading] Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection
Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection, a deep relational reasoning graph network for arbitrary-shape text detection, published at CVPR 2020. [Paper] https://arxiv.org/abs/2003.07493 [Code] https://github.com/GXYM/DRRG Contents: Abstract, Introduction, … Original · 2020-03-27 11:50:40 · 6774 views · 0 comments
[Paper Reading] Chinese Street View Text: Large-scale Chinese Text Reading with Partially Supervised Learning
Contents: Abstract, Introduction, Related Work, Text Reading Benchmarks, End-to-End Text Reading, Weakly and Partially Supervised Learning, Method, Experiments, References. Published at ICCV 2019. [Paper] http://openaccess.thecvf.com/content_ICCV_2019/html/Sun_Chinese_Street_View_Text_Large-Scale_Chinese_Text_Reading_With_Partially_ICCV_2… Original · 2020-03-20 11:01:47 · 1485 views · 0 comments
[Paper Reading] TextDragon: An End-to-End Framework for Arbitrary Shaped Text Spotting (reading notes)
Reading notes on TextDragon: An End-to-End Framework for Arbitrary Shaped Text Spotting, published at ICCV 2019. [Paper] http://openaccess.thecvf.com/content_ICCV_2019/html/Feng_TextDragon_An_End-to-End_Framework_for_Arbitra… Original · 2020-03-12 17:34:45 · 2205 views · 0 comments
Notes on Migrating Caffe Code to PyTorch
Three months into graduate school, my daily routine has been classes, reading papers, and running models; I had never written a complete model by hand. Recently I wanted to add a module to a Caffe model, but the Caffe code proved too hard to modify (my C++ is quite rusty; I plan to revisit the Caffe internals someday), so I switched to PyTorch. This post only covers the main caveats and the general procedure rather than a step-by-step walkthrough; the details only become clear by doing. Migrating the Caffe network structure, reference article: PyTo… Original · 2019-12-16 20:09:12 · 1000 views · 0 comments
cs231n assignment2 PyTorch
Contents: Barebones PyTorch, Three-Layer ConvNet, Training a ConvNet, PyTorch Module API. Barebones PyTorch / Three-Layer ConvNet: implement a convolutional neural network at PyTorch abstraction level 1. three_layer_convnet(): out1 = F.conv2d(x, conv_w1, bias=con… Original · 2019-08-21 10:37:41 · 2611 views · 0 comments
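As a companion to the snippet above, here is a minimal sketch of what a "barebones" (abstraction level 1) three-layer ConvNet forward pass might look like; the filter counts, paddings, and parameter layout are illustrative assumptions, not the assignment's exact specification.

```python
import torch
import torch.nn.functional as F

def three_layer_convnet(x, params):
    """Barebones forward pass: conv - relu - conv - relu - linear.

    params holds raw tensors rather than nn.Module state, mirroring
    the out1 = F.conv2d(...) style quoted in the excerpt.
    """
    conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
    out1 = F.relu(F.conv2d(x, conv_w1, bias=conv_b1, padding=2))
    out2 = F.relu(F.conv2d(out1, conv_w2, bias=conv_b2, padding=1))
    scores = out2.flatten(start_dim=1).mm(fc_w) + fc_b
    return scores

# Illustrative shapes: 32 5x5 filters, then 16 3x3 filters, 10 classes.
x = torch.zeros(64, 3, 32, 32)
params = [
    torch.randn(32, 3, 5, 5) * 1e-2, torch.zeros(32),
    torch.randn(16, 32, 3, 3) * 1e-2, torch.zeros(16),
    torch.randn(16 * 32 * 32, 10) * 1e-2, torch.zeros(10),
]
scores = three_layer_convnet(x, params)  # shape (64, 10)
```

With padding 2 on a 5x5 filter and padding 1 on a 3x3 filter, spatial size stays 32x32 throughout, so the fully-connected weight sees 16 * 32 * 32 inputs.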
cs231n assignment2 ConvolutionalNetworks
Contents: Convolution, Naive forward pass, Naive backward pass. Convolution / Naive forward pass: implement the convolution operation, following the convolution procedure and writing the most understandable code. N, C, H, W = x.shape; F, _, HH, WW = w.shape; stride, pad = conv_param['stride'], conv_param… Original · 2019-08-11 20:49:58 · 1247 views · 0 comments
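The excerpt quotes only the shape-unpacking setup of the naive forward pass; a self-contained version of the full loop might look like this (the variable names follow the quoted shapes, while the loop body is my own fill-in, not the post's exact code):

```python
import numpy as np

def conv_forward_naive(x, w, b, conv_param):
    """Naive convolution forward pass with explicit loops.

    x: (N, C, H, W) input, w: (F, C, HH, WW) filters, b: (F,) biases.
    """
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):            # each image
        for f in range(F):        # each filter
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :,
                                   i * stride:i * stride + HH,
                                   j * stride:j * stride + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out

out = conv_forward_naive(np.ones((2, 3, 4, 4)), np.ones((1, 3, 3, 3)),
                         np.zeros(1), {'stride': 2, 'pad': 1})
```

The quadruple loop is deliberately slow but makes the sliding-window semantics explicit, which is the point of the "naive" version.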
cs231n assignment2 Batch Normalization
The backward derivation for BatchNormalization is a bit more involved than the earlier layers, but once you draw the computational graph and work backwards from the output, it becomes simple. Contents: Batch normalization, forward, backward, Layer Normalization: Implementation, Inline Question. Batch normalization / forward: first implement batchnorm_f… in layers.py. Original · 2019-08-02 10:30:21 · 1757 views · 1 comment
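A minimal training-time batchnorm forward in numpy, to make the structure concrete; the interface loosely follows the assignment's forward function, but the exact signature and cache contents here are my guesses:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization over the batch axis.

    Returns the output and a cache of intermediates; walking this
    chain of nodes backwards is exactly the computational-graph
    approach described above for deriving the backward pass.
    """
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize
    out = gamma * x_hat + beta              # learned scale and shift
    cache = (x, x_hat, mu, var, gamma, eps)
    return out, cache

x = np.random.randn(100, 5) * 3.0 + 7.0
out, _ = batchnorm_forward(x, np.ones(5), np.zeros(5))
# With gamma=1, beta=0 each column comes out with (approximately)
# zero mean and unit variance.
```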
cs231n assignment2 dropout
Code: https://github.com/LiuZhe6/CS231N To keep a neural network from overfitting the data, dropout can be used. The main idea: randomly set part of the hidden-layer outputs (or weights) to 0. Contents: Dropout, forward pass, backward pass, Inline Question. Dropout / forward pass: the assignment requires inverted dropout, whose main idea is that during the training stage… Original · 2019-08-03 11:08:52 · 1158 views · 4 comments
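The inverted-dropout idea the excerpt begins to describe, sketched in numpy: scale by 1/p at training time so that test time is a plain identity pass. The parameter name p and the mask handling follow the standard formulation; the assignment's exact variable names may differ.

```python
import numpy as np

def dropout_forward(x, p, mode):
    """Inverted dropout forward pass.

    p is the probability of *keeping* a unit. Dividing the mask by p
    at training time keeps the expected activation unchanged, so no
    rescaling is needed at test time.
    """
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p  # 0 or 1/p per unit
        return x * mask, mask
    return x, None  # test mode: identity

x = np.ones((4, 4))
out_train, mask = dropout_forward(x, p=0.5, mode='train')
out_test, _ = dropout_forward(x, p=0.5, mode='test')
```

With p=0.5 on an all-ones input, surviving units become 2.0 and dropped units 0.0, while the test-mode output is the input unchanged.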
cs231n assignment1 features
features: The previous assignments fed the raw image pixels directly into the model; this assignment instead uses Histogram of Oriented Gradients (HOG) features and the HSV color space. In short, HOG ignores color information and captures only the image's texture, while a color histogram captures only color and ignores texture; using the two together works better. Train SVM on features: learning_rates = [0… Original · 2019-07-17 17:25:28 · 1713 views · 0 comments
cs231n assignment2 Fully-connected Neural Network
This begins the convolutional-network part of the course, and there is a lot of material. The assignment has five parts: Q1: Fully-connected Neural Network, Q2: Batch Normalization, Q3: Dropout, Q4: Convolutional Networks, Q5: PyTorch / TensorFlow on CIFAR-10. What follows covers the fully-connected network, which uses a modular design with a very clear structure. Preparation… Original · 2019-07-29 11:35:36 · 2355 views · 0 comments
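A small example of the modular design mentioned above: every layer exposes a forward function returning (out, cache) and a backward function consuming (dout, cache), so layers can be snapped together freely. The affine layer is the simplest instance (the names follow the common cs231n convention; treat the exact signatures as illustrative):

```python
import numpy as np

def affine_forward(x, w, b):
    """out = x.reshape(N, -1) @ w + b; cache the inputs for backward."""
    out = x.reshape(x.shape[0], -1).dot(w) + b
    return out, (x, w, b)

def affine_backward(dout, cache):
    """Gradients of the affine layer w.r.t. input and parameters."""
    x, w, b = cache
    dx = dout.dot(w.T).reshape(x.shape)            # back to input shape
    dw = x.reshape(x.shape[0], -1).T.dot(dout)     # same shape as w
    db = dout.sum(axis=0)                          # same shape as b
    return dx, dw, db

x = np.random.randn(10, 6)
w, b = np.random.randn(6, 3), np.zeros(3)
out, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(np.ones_like(out), cache)
```

Because every layer obeys the same (out, cache) / (dout, cache) contract, a full network's forward and backward passes reduce to chaining these calls.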
cs231n assignment1 softmax
Softmax: the Softmax loss function is

$$L_{i}=-\log p_{y_{i}}=-\log \left(\frac{e^{f_{y_{i}}}}{\sum_{j} e^{f_{j}}}\right)=-f_{y_{i}}+\log \sum_{j} e^{f_{j}}$$

… Original · 2019-07-12 11:48:38 · 1833 views · 0 comments
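The loss above translates directly into numpy; the version below adds the standard numerical-stability trick of subtracting each row's max before exponentiating (an assumption beyond what the excerpt shows, though it does not change the result):

```python
import numpy as np

def softmax_loss(f, y):
    """Average softmax loss: L_i = -f_{y_i} + log(sum_j e^{f_j}).

    f: (N, C) class scores, y: (N,) correct class indices.
    """
    f = f - f.max(axis=1, keepdims=True)        # stability shift
    log_sum_exp = np.log(np.exp(f).sum(axis=1))  # log sum_j e^{f_j}
    return np.mean(-f[np.arange(len(y)), y] + log_sum_exp)

# Sanity check: uniform scores over 4 classes give loss log(4).
loss = softmax_loss(np.zeros((3, 4)), np.array([0, 1, 2]))
```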
cs231n assignment1 two-layer-net
two-layer-net: first complete the network's computation of scores and the loss function, using ReLU, i.e. max(0, x), as the activation function. In neural_net.py's loss(): # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** h1 = np.maximum(0,X.dot(W1) + b1) scores = h1.d… Original · 2019-07-16 11:36:09 · 2004 views · 1 comment
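The forward pass quoted above, completed into a self-contained sketch; the layer sizes and the continuation of the truncated scores = h1.d… line are filled in by assumption:

```python
import numpy as np

def two_layer_scores(X, W1, b1, W2, b2):
    """Forward pass of a two-layer net: affine - ReLU - affine."""
    h1 = np.maximum(0, X.dot(W1) + b1)  # ReLU activation max(0, x)
    scores = h1.dot(W2) + b2            # presumed completion of h1.d…
    return scores

# Illustrative sizes: 4 input features, 10 hidden units, 3 classes.
X = np.random.randn(5, 4)
W1, b1 = np.random.randn(4, 10), np.zeros(10)
W2, b2 = np.random.randn(10, 3), np.zeros(3)
scores = two_layer_scores(X, W1, b1, W2, b2)  # shape (5, 3)
```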
cs231n assignment1 SVM
Previous post: cs231n assignment1 knn. Contents: SVM, Inline Question. SVM: the support vector machine loss function is

$$L_{i}=\sum_{j \neq y_{i}} \max \left(0, s_{j}-s_{y_{i}}+\Delta\right)$$

where $s_{j}$… Original · 2019-07-10 17:20:29 · 1519 views · 0 comments
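A vectorized numpy rendering of the hinge loss above, with the margin set to the usual Δ = 1 (the assignment's exact implementation may differ):

```python
import numpy as np

def svm_loss(s, y, delta=1.0):
    """Multiclass SVM loss: L_i = sum_{j != y_i} max(0, s_j - s_{y_i} + delta)."""
    N = s.shape[0]
    correct = s[np.arange(N), y][:, None]         # s_{y_i} per row
    margins = np.maximum(0, s - correct + delta)  # hinge on every class
    margins[np.arange(N), y] = 0                  # enforce j != y_i
    return margins.sum() / N

# Toy example: row 0 incurs margin 2.2 - 3.0 + 1 = 0.2, row 1 none.
s = np.array([[3.0, 1.0, 2.2],
              [1.0, 5.0, 2.0]])
y = np.array([0, 1])
loss = svm_loss(s, y)  # 0.2 / 2 = 0.1
```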
cs231n assignment1 knn
Contents: Preparation, Assignment1, 1. k-Nearest Neighbor classifier, 2. SVM, 3. Softmax classifier, 4. Two-Layer Neural Network, 5. Higher Level Representations: Image Features, Inline Problem, Inline Question 1, Inline Question… Original · 2019-07-09 14:37:30 · 3505 views · 7 comments