cv
masonwang_513
Algorithm engineer in computer vision
Group Normalization vs Batch Normalization
Problems with BN: 1. BN depends on a large batch size; when the batch size is too small, the batch statistics become inaccurate, yet GPU memory limits how large the batch can be, especially for memory-hungry models such as detection and segmentation. Batch size is also an engineering problem; after all, in last year's COCO competition Face++ won mainly thanks to large batches, which is the most important motivation here. 2. BN requires the batch distribution to be reasonably... (Original post, 2020-03-03) -
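The batch-size dependence is exactly what Group Normalization removes: statistics are computed per sample over groups of channels, so the batch dimension never enters. A minimal numpy sketch (shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize each sample over channel groups; x has shape (N, C, H, W).
    The batch dimension plays no role in the statistics, so batch size is irrelevant."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

x = np.random.randn(2, 8, 4, 4)
y = group_norm(x, num_groups=4)   # behaves identically with batch size 1
```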
Normalization vs Regularization in Machine or Deep learning
Normalisation adjusts the data; regularisation adjusts the prediction function. (Reposted, 2020-03-03) -
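A small numpy illustration of that distinction (all values are made up):

```python
import numpy as np

# Normalization rescales the *data*: here to zero mean and unit variance.
x = np.array([1.0, 2.0, 3.0, 4.0])
x_norm = (x - x.mean()) / x.std()

# Regularization changes the *objective* the prediction function is fit to:
# an L2 penalty on the weights is added to the data-fit loss.
w = np.array([0.5, -1.5])
data_loss = 0.1                      # illustrative data-fit term
lam = 0.01                           # regularization strength
total_loss = data_loss + lam * np.sum(w ** 2)
```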
The Mean Shift Algorithm
Mean Shift generally refers to an iterative procedure: compute the mean shift vector at the current point, move the point to that mean, take the new position as the starting point, and repeat until a stopping condition is met. 1. Mean Shift derivation: given n sample points x_i, i = 1, ..., n, in the d-dimensional space R^d, pick an arbitrary point x; the basic form of the Mean Shift vector is then defined as M(x) = (1/k) Σ_{x_i ∈ S_k} (x_i − x), where S_k is a... (Reposted, 2017-08-05) -
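The iteration described above, sketched in numpy with a flat (uniform) kernel over a fixed-radius window (a toy illustration, not the full derivation):

```python
import numpy as np

def mean_shift_step(x, points, radius):
    """One Mean Shift update: move x to the mean of the samples within `radius`."""
    in_window = points[np.linalg.norm(points - x, axis=1) < radius]
    return in_window.mean(axis=0)

points = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [5.0, 5.0]])
x = np.array([0.0, 0.0])
for _ in range(10):                 # iterate until the shift (roughly) vanishes
    x = mean_shift_step(x, points, radius=1.0)
```

Starting at the origin, the point drifts to the mean of the nearby cluster and stays there; the distant sample at (5, 5) never falls inside the window.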
Why is accuracy high when training in Caffe but low when testing with classification?
This is caused by inconsistent preprocessing. During training the network uses the pixel mean, while at deployment time classification.cpp computes the channel mean by default. Fix: the setMean method in classification.cpp computes the per-channel mean of the image (channel_mean); change it to compute the per-pixel mean (pixel_mean). In fact, mean... (Original post, 2017-11-21) -
caffe forward_cpu
template <typename Dtype>
void ConvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  // blobs_[0] is the weight blob, blobs_[1] is the bias blob
  const Dtype* weight = this->blobs_[0]->cpu_data();
  ... (Reposted, 2017-11-14) -
Selecting an ROI with the mouse in OpenCV
Mouse handling is part of user-interface design; I used to do it with Qt, but for simple mouse and keyboard interaction, calling OpenCV's own functions directly works just as well. Mouse handling appears everywhere (MFC, Qt, OpenGL, and so on) and in principle comes down to two points: first, monitor the mouse events (press, move, release) and use the event code to identify which operation occurred, then handle each accordingly; second, register a mouse callback in the main function to bind mouse handling to the program window. (Reposted, 2017-12-07) -
How a Kalman filter works, in pictures
I have to tell you about the Kalman filter, because what it does is pretty damn amazing. Surprisingly few software engineers and scientists seem to know about it, and that makes me sad because it is... (Reposted, 2017-12-31) -
Image Similarity Siamese Network
Overview: With this kernel I am trying to run a simple test on using Siamese networks for similarity on a slightly more complicated problem than standard MNIST. The idea is to take a randomly initialized... (Reposted, 2018-03-27) -
RANSAC Explained in Detail
Original link: http://grunt1223.iteye.com/blog/961063 Given the coordinates of two points p1 and p2, determine the line they define; then, for any input point p3, judge whether it lies on that line. Basic analytic geometry tells us that a point lies on a line as long as its slope with any two points on the line is the same. In practice, one usually first derives the line's equation from the two known points (point-slope form, intercept form, etc.) and then checks conveniently via vector computation... (Reposted, 2017-07-26) -
Fitting ellipses using RANSAC
Fitting ellipses using the RANSAC algorithm. Basic knowledge about the ellipse equation: http://mathworld.wolfram.com/Ellipse.html. In this tutorial, I am going to use the general quadratic curve which c... (Reposted, 2017-07-26) -
numpy -- ndarray
NumPy provides two basic objects: ndarray (N-dimensional array object) and ufunc (universal function object). An ndarray (simply called an array below) is a multi-dimensional array of a single data type, while a ufunc is a function that can operate on arrays. 1. Creation. 1.1 Specifying the element type at creation: import numpy as np; a =... (Reposted, 2017-07-19) -
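Continuing the truncated example, a small sketch of both objects (values are illustrative):

```python
import numpy as np

# ndarray: a multi-dimensional array with a single element type (dtype).
a = np.array([1, 2, 3])                     # dtype inferred from the data
b = np.array([1, 2, 3], dtype=np.float32)   # element type given explicitly

# ufunc: a function that operates elementwise on whole arrays.
c = np.add(a, b)
```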
batch size, mini-batch, iterations and epoch
Gradient descent is an iterative algorithm which computes the gradient of a function and uses it to update the parameters of the function in order to find a maximum or minimum value of the function. (Reposted, 2017-07-14) -
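The relationship between batch size, iterations, and epochs reduces to a little arithmetic; a sketch with made-up numbers:

```python
import math

# One epoch is a full pass over the data; with N samples and mini-batches
# of size B it takes ceil(N / B) iterations (parameter updates).
num_samples = 50_000
batch_size = 128
iters_per_epoch = math.ceil(num_samples / batch_size)
total_iters = 10 * iters_per_epoch       # ten epochs of training
```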
Histogram of Oriented Gradients
Histogram of Oriented Gradients. December 6, 2016, by Satya Mallick. In this post, we will learn the details of the Histogram of Oriented Gradients (HOG) feature descriptor. We will learn wha... (Reposted, 2017-07-12) -
Linear Kernel: Why is it recommended for text classification?
The Support Vector Machine can be viewed as a kernel machine. As a result, you can change its behavior by using a different kernel function. The most popular kernel functions are: the linear ke... (Reposted, 2017-07-14) -
Logistic Regression vs linear SVM
Given a binary classification problem, the goal is to find the "best" line that has the maximum probability of classifying unseen points correctly. How you define this notion of "best" gives you dif... (Reposted, 2017-07-14) -
Different types of SVM -- opencv
They are different formulations of SVM. At the heart of SVM is a mathematical optimization problem, and this problem can be stated in different ways. C-SVM uses C as the tradeoff parameter between the... (Reposted, 2017-07-17) -
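The two most common formulations side by side, sketched with scikit-learn (assumed available; the data is a toy 1-D problem):

```python
import numpy as np
from sklearn.svm import SVC, NuSVC

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = [0, 0, 1, 1]

# C-SVM: C >= 0 trades margin width against training violations.
c_svm = SVC(kernel="linear", C=1.0).fit(X, y)

# nu-SVM: nu in (0, 1] bounds the fraction of margin errors / support vectors.
nu_svm = NuSVC(kernel="linear", nu=0.5).fit(X, y)
```

Both solve essentially the same optimization; C and nu are just different handles on the same tradeoff.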
gradient descent vs (mini-batch) stochastic gradient descent
In order to explain the differences between alternative approaches to estimating the parameters of a model, let's take a look at a concrete example: Ordinary Least Squares (OLS) Linear Regression. T... (Reposted, 2017-07-29) -
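Sticking with the OLS example, a numpy sketch contrasting the two update rules (learning rate, sizes, and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -3.0])
y = X @ w_true                        # noiseless OLS problem, for clarity

def grad(w, Xb, yb):
    """Gradient of the mean squared error on the given (mini-)batch."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Batch gradient descent: every step uses the full dataset.
w_gd = np.zeros(2)
for _ in range(500):
    w_gd -= 0.1 * grad(w_gd, X, y)

# Mini-batch SGD: each step uses a random subset (here of size 20).
w_sgd = np.zeros(2)
for _ in range(500):
    idx = rng.choice(len(X), size=20, replace=False)
    w_sgd -= 0.1 * grad(w_sgd, X[idx], y[idx])
```

Each full-batch step is 10x more expensive here, but the mini-batch steps are noisier; both recover the true weights on this easy problem.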
What is the difference between the matrix solution of least squares and gradient descent?
Answer 1: Similarities. 1. Same essence: given known data (independent and dependent variables), both methods work out a general estimating function and then use it to estimate the dependent variable for new data. 2. Same goal: within the framework of the known data, both try to make the total squared difference between estimated and actual values as small as possible (in fact, squaring is not strictly required); the estimated and... (Reposted, 2017-07-29) -
List object in Python
Lists in the Python language can be compared to arrays in Java, but they differ in many other aspects. Lists are used in almost every program written in Python. In this tutorial we will understand P... (Reposted, 2017-07-21) -
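A few of the properties that distinguish Python lists from fixed-type Java arrays:

```python
items = [1, "two", 3.0]      # heterogeneous element types are allowed
items.append(True)           # lists resize in place
items[1] = 2                 # elements are mutable
first_two = items[:2]        # slicing returns a new list
```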
HOGDescriptor with SVM
Using HOGDescriptor with SVM: I am working on Traffic Sign Recognition (TSR) and using an SVM with HOG features for the detection step. This post will show how to use the HOGDescriptor with a (2-c... (Reposted, 2017-07-17) -
The effect of the class_weight parameter on a linear SVM classifier
Best way to handle an unbalanced dataset with SVM: I'm trying to build a prediction model with SVMs on fairly unbalanced data. My labels/output have two classes, positive and negative. I would say t... (Original post, 2017-07-17) -
Histogram of Oriented Gradients (HOG)
1. Introduction: HOG (Histogram of Oriented Gradients) is an image descriptor for human detection proposed at CVPR 2005 by Dalal et al. from INRIA, the French national institute for research in computer science and automation. The method uses Histogram of Oriented Gradients features to represent the human body, extracting its shape and motion information to form a rich feature set. 2. Generation pro... (Reposted, 2017-07-12)