Understanding the Kalman filter, a C implementation, and using OpenCV's KalmanFilter

A blog post that is still useful in practice: "Kalman滤波器的历史渊源" (The Historical Origins of the Kalman Filter).

The excerpts worth keeping from it:

 

Background:

The Kalman filter is a highly efficient recursive filter (autoregressive filter). It can estimate the state of a dynamic system from a series of incomplete and noisy measurements. A typical application is predicting an object's position coordinates and velocity from a finite sequence of noisy (and possibly biased) observations of its position.

The method is named after its inventor, Rudolph E. Kalman, although the literature shows that Peter Swerling had actually proposed a similar algorithm even earlier.

Stanley Schmidt was the first to implement a Kalman filter. While Kalman was visiting the NASA Ames Research Center, it became clear that his method was useful for the trajectory-prediction problem of the Apollo program, and the Apollo spacecraft's navigation computer later used this filter. Papers on the filter were published by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).

Many different implementations of the Kalman filter now exist. Kalman's originally proposed form is now generally called the simple Kalman filter. Beyond it there are the Schmidt extended filter, the information filter, and a variety of square-root filter variants developed by Bierman, Thornton, and others. Perhaps the most commonly encountered "Kalman filter" is the phase-locked loop, which is ubiquitous in radios, computers, and almost every kind of video or communications equipment.

 

From Newton to Kalman

 

From here on we begin our exploration of the Kalman filter. Let's go back to first-year high-school physics and start with a cart in uniformly accelerated linear motion.

Suppose a cart of mass m, acted on by a constant force F, moves with uniform acceleration along the direction r. Given the displacement s(t-1) at time t-ΔT and the velocity v(t-1) at that moment, find the displacement s(t) and velocity v(t) at time t.

By Newton's second law, the acceleration is:

a = F/m

The displacement and velocity then obey:

s(t) = s(t-1) + v(t-1)·ΔT + (1/2)·a·ΔT²
v(t) = v(t-1) + a·ΔT

 

If we write the expressions above together in matrix form, we get:

| s(t) |   | 1  ΔT | | s(t-1) |   | ΔT²/2 |
| v(t) | = | 0   1 | | v(t-1) | + |  ΔT   | · (F/m)        (1)

The Kalman filter is built on top of a dynamic process. Because the physical quantities involved (displacement, velocity) cannot change abruptly, the state at time t can be estimated (predicted) from the state at time t-1. Its state-space model is:

x(t) = A·x(t-1) + B·u(t) + w(t)        (2)

Compare (1) with (2): they look remarkably alike, don't they?

Uniformly accelerated linear motion is thus a textbook application of the Kalman filter's state-space model. From here on we concentrate on (2). Since the signals a computer processes are discrete, we write (2) in discrete form:

x(n) = A·x(n-1) + B·u(n) + w(n)        (3)

The quantities mean the following (a small code sketch follows the list):

  1. x(n) is the state vector; it contains the quantities being tracked (e.g., displacement and velocity).
  2. u(n) is the driving input vector. In the motion example above the drive is the applied force that produces the acceleration, so u(n) is determined by the force.
  3. A is the state transition matrix; it implicitly encodes that "the state at time n-1 influences the state at time n" (somewhat reminiscent of a Markov process).
  4. B is the control input matrix; it implicitly encodes "how the drive applied at time n affects the state at time n".

    From the motion viewpoint this is easy to picture: part of the cart's displacement and velocity at time n is carried over by inertia from time n-1, which A·x(n-1) measures; the rest comes from the new external force applied at time n, which B·u(n) measures.

  5. w(n) is the process noise, distributed as w(n) ~ N(0, Q). Process noise is an important quantity when using a Kalman filter; it is analyzed later.
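
To make (3) concrete, here is a minimal sketch (not from the original post; the sampling interval dt and the force-to-mass ratio are assumed values) that builds A, B, and the observation matrix H used in the measurement step discussed next, all with OpenCV's cv::Mat:

#include <opencv2/core/core.hpp>
using namespace cv;

int main()
{
	const float dt = 0.1f;                          // assumed sampling interval ΔT
	// State x = [displacement; velocity], scalar input u = F/m.
	Mat A = (Mat_<float>(2, 2) << 1, dt,
	                              0, 1);            // state transition matrix
	Mat B = (Mat_<float>(2, 1) << 0.5f * dt * dt,
	                              dt);              // control input matrix
	Mat H = (Mat_<float>(1, 2) << 1, 0);            // we measure displacement only
	Mat x = (Mat_<float>(2, 1) << 0, 0);            // initial state
	Mat u = (Mat_<float>(1, 1) << 2.0f);            // assumed F/m

	x = A * x + B * u;   // one noise-free prediction step of equation (3)
	return 0;
}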

There is another way to obtain the displacement at time n: take a long tape measure (well, if the cart has been running for a long time, that tape measure may be hard to buy), stretch it from the starting point, and read the value off directly; call this measured value z(n). And the velocity? Just attach a velocity sensor and read it off.

However, as we learned in school, "a ruler never measures exactly; the true value of a physical quantity cannot be obtained." Measurement carries error, which we denote v(n) for now. Obtaining the desired quantities by direct measurement in this way constitutes the observation space:

z(n) = H(n)·x(n) + v(n)        (4)

z(n) is the measurement result, H(n) is the observation matrix (vector), x(n) is the physical quantity we want (displacement, velocity), and v(n) ~ N(0, R) is the measurement noise; like the process noise in the state equation, it will be discussed later. In most cases, when a physical quantity can be measured directly by a sensor, the corresponding entry of H is simply 1.

 

We now have two ways (as in the figure above) to obtain the displacement and velocity at time n: one is recursive computation through the state-space equation (3) (Prediction); the other is direct measurement with the tape measure and sensor through (4) (Measurement). The fatal problem is that neither is exact: as the figure shows, they carry zero-mean Gaussian errors w(n) and v(n) respectively.

So, should I take the tape-measure reading as my final answer, or the value deduced from our great Newton's second law? Or neither?


As the figure below shows: after fusing the Gaussian distribution of the estimate with the Gaussian distribution of the measurement, we get the green Gaussian curve.

 

Working through the product of the two Gaussians gives the fused mean and variance:

μ = μ1 + σ1²·(μ2 - μ1)/(σ1² + σ2²)
σ² = σ1² - σ1⁴/(σ1² + σ2²)

where (μ1, σ1²) describe the estimate and (μ2, σ2²) the measurement. Now let

K = σ1²/(σ1² + σ2²)

so that μ = μ1 + K·(μ2 - μ1) and σ² = (1 - K)·σ1².

 

Key point:

The intuitive reading of the gain K: it is the share of the estimate's variance in the total variance (estimate variance plus measurement variance).
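
For reference, the standard discrete Kalman recursion, which generalizes this scalar fusion to vectors and matrices, reads (textbook form, using the notation of equations (3) and (4)):

\[
\begin{aligned}
\hat{x}^-(n) &= A\,\hat{x}(n-1) + B\,u(n) && \text{(state prediction)} \\
P^-(n) &= A\,P(n-1)\,A^{\mathsf{T}} + Q && \text{(covariance prediction)} \\
K(n) &= P^-(n)\,H^{\mathsf{T}}\left(H\,P^-(n)\,H^{\mathsf{T}} + R\right)^{-1} && \text{(Kalman gain)} \\
\hat{x}(n) &= \hat{x}^-(n) + K(n)\left(z(n) - H\,\hat{x}^-(n)\right) && \text{(state update)} \\
P(n) &= \left(I - K(n)\,H\right)P^-(n) && \text{(covariance update)}
\end{aligned}
\]

In the scalar case with A = H = 1 these collapse to exactly the K, μ, and σ² expressions above.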

Setting the Kalman parameters Q and R is very important. As mentioned earlier, they generally have to be obtained through experiments and statistics; they represent, respectively, the error of the state-space estimate and the error of the measurement.

 

Of these, the parameters q and r are especially important; in general they have to be determined through experimental testing.
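
Here is a minimal sketch of the kind of scalar (1-D) Kalman filter the test below refers to, written to match the scalar fusion equations above (A = H = 1; the q and r values are placeholders to be tuned experimentally):

#include <stdio.h>

/* Scalar Kalman filter state. */
typedef struct {
	double x;   /* posteriori state estimate          */
	double p;   /* posteriori estimate error variance */
	double q;   /* process noise variance Q           */
	double r;   /* measurement noise variance R       */
} Kalman1D;

/* One predict + update step for a constant-state model (A = 1, H = 1). */
double kalman1d_step(Kalman1D *kf, double z)
{
	/* prediction: x_(n) = x(n-1);  p_(n) = p(n-1) + q */
	double x_pri = kf->x;
	double p_pri = kf->p + kf->q;

	/* correction: k = p_/(p_+r);  x = x_ + k(z - x_);  p = (1-k)p_ */
	double k = p_pri / (p_pri + kf->r);
	kf->x = x_pri + k * (z - x_pri);
	kf->p = (1.0 - k) * p_pri;
	return kf->x;
}

int main(void)
{
	Kalman1D kf = { 0.0, 1.0, 4e-3, 0.25 };   /* placeholder q and r */
	double z[] = { 1.2, 0.9, 1.1, 1.05, 0.95 };
	for (int i = 0; i < 5; i++)
		printf("z=%.2f  filtered=%.3f\n", z[i], kalman1d_step(&kf, z[i]));
	return 0;
}

For a 2-D state (angle plus angular velocity) the same recursion applies with A = [1, 1; 0, 1], which is exactly what the OpenCV examples later in this post set up.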

Take two sets of angle data from acoustic-array direction finding and use them to test the C program above. The results of the 1-D Kalman filter (the scalar case; as far as I know, most code found online only handles the scalar case) and of the 2-D Kalman filter (one state is the angle, the other is the per-step angle difference, i.e., the angular velocity) are both shown in the figure. One more reminder: do not pick states that can change abruptly, such as acceleration; this was already mentioned in the section "From Newton to Kalman".

 

The tracking results plotted in Matlab show the following:

The Kalman-filtered output is smoother than the original signal. However, at salt-and-pepper impulse points the Kalman filter cannot remove the impulse's influence and will track the impulse. It is therefore recommended to run a median filter before the Kalman filter to remove salt-and-pepper impulses.
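
To illustrate that recommendation, here is a minimal window-of-three sliding-median despiking pass (a sketch, not the original program) that can be run over the measurement sequence before it is fed to the Kalman filter:

#include <algorithm>
#include <vector>

// Replace each interior sample with the median of {previous, current, next}.
// A window of three is enough to suppress isolated salt-and-pepper impulses
// while leaving slowly varying data almost untouched.
std::vector<double> median3(const std::vector<double> &in)
{
	std::vector<double> out(in);
	for (size_t i = 1; i + 1 < in.size(); i++) {
		double w[3] = { in[i - 1], in[i], in[i + 1] };
		std::sort(w, w + 3);
		out[i] = w[1];   // the median of the three samples
	}
	return out;
}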


Below is the source code: a C++/OpenCV video-stabilization program that smooths the camera's motion trajectory with a Kalman filter.


#include "stdafx.h"
#include <opencv2/opencv.hpp>  
#include <opencv2/highgui/highgui.hpp>  
#include <iostream>  
#include <cassert>  
#include <cmath>  
#include <fstream>  

using namespace std;
using namespace cv;

// This video stabilisation smooths the global trajectory using a sliding average window  
//const int SMOOTHING_RADIUS = 15; // In frames. The larger the more stable the video, but less reactive to sudden panning  
const int HORIZONTAL_BORDER_CROP = 20; // In pixels. Crops the border to reduce the black borders from stabilisation being too noticeable.  

// 1. Get previous to current frame transformation (dx, dy, da) for all frames  
// 2. Accumulate the transformations to get the image trajectory  
// 3. Smooth out the trajectory using an averaging window  
// 4. Generate new set of previous to current transform, such that the trajectory ends up being the same as the smoothed trajectory  
// 5. Apply the new transformation to the video  


struct Trajectory
{
	Trajectory() {}
	Trajectory(double _x, double _y, double _a) {
		x = _x;
		y = _y;
		a = _a;
	}
	// "+"  
	friend Trajectory operator+(const Trajectory &c1, const Trajectory  &c2){
		return Trajectory(c1.x + c2.x, c1.y + c2.y, c1.a + c2.a);
	}
	//"-"  
	friend Trajectory operator-(const Trajectory &c1, const Trajectory  &c2){
		return Trajectory(c1.x - c2.x, c1.y - c2.y, c1.a - c2.a);
	}
	//"*"  
	friend Trajectory operator*(const Trajectory &c1, const Trajectory  &c2){
		return Trajectory(c1.x*c2.x, c1.y*c2.y, c1.a*c2.a);
	}
	//"/"  
	friend Trajectory operator/(const Trajectory &c1, const Trajectory  &c2){
		return Trajectory(c1.x / c2.x, c1.y / c2.y, c1.a / c2.a);
	}
	//"="  
	Trajectory operator =(const Trajectory &rx){
		x = rx.x;
		y = rx.y;
		a = rx.a;
		return Trajectory(x, y, a);
	}

	double x;
	double y;
	double a; // angle  
};
//  
int main(int argc, char **argv)
{

	// For further analysis  
	ofstream out_transform("prev_to_cur_transformation.txt");
	ofstream out_trajectory("trajectory.txt");
	ofstream out_smoothed_trajectory("smoothed_trajectory.txt");
	ofstream out_new_transform("new_prev_to_cur_transformation.txt");

//	ofstream out_trajectory("trajectory.txt");

	VideoCapture cap("my.avi");
	assert(cap.isOpened());

	Mat cur, cur_grey;
	Mat prev, prev_grey;

	cap >> prev; // get the first frame
	prev.copyTo(cur);
	cvtColor(prev, prev_grey, COLOR_BGR2GRAY);

	// Step 1 - Get previous to current frame transformation (dx, dy, da) for all frames  
	// Accumulated frame to frame transform  
	double a = 0;
	double x = 0;
	double y = 0;
	// Step 2 - Accumulate the transformations to get the image trajectory  
	vector <Trajectory> trajectory; // trajectory at all frames  
	//  
	// Step 3 - Smooth out the trajectory using an averaging window  
	vector <Trajectory> smoothed_trajectory; // trajectory at all frames  
	Trajectory X;//posteriori state estimate  
	Trajectory  X_;//priori estimate  
	Trajectory P;// posteriori estimate error covariance  
	Trajectory P_;// priori estimate error covariance  
	Trajectory K;//gain  
	Trajectory  z;//actual measurement  
	double pstd = 4e-3;//can be changed  4e-3
	double cstd = 0.25;//can be changed  0.25
	/* w and v are assumed to be white Gaussian noise with covariances Q and R
	   respectively (here assumed not to vary with the system state). Choosing
	   their values matters a great deal; in general they must be obtained through
	   experiments and statistics, since they represent the error of the state-space
	   estimate and the error of the measurement, respectively. */
	Trajectory Q(pstd, pstd, pstd);// process noise covariance  
	Trajectory R(cstd, cstd, cstd);// measurement noise covariance   
	// Step 4 - Generate new set of previous to current transform, such that the trajectory ends up being the same as the smoothed trajectory   
	// Step 5 - Apply the new transformation to the video  
	//cap.set(CV_CAP_PROP_POS_FRAMES, 0);  
	Mat T(2, 3, CV_64F); 
	// vert_border is roughly how much edge to crop after stabilisation; it is kept proportional to HORIZONTAL_BORDER_CROP, though the value is fairly arbitrary
	int vert_border = HORIZONTAL_BORDER_CROP * prev.rows / prev.cols; // get the aspect ratio correct  
	//VideoWriter outputVideo("compare.avi", CV_FOURCC('M', 'P', '4', '2'), 20, cvSize(prev.rows, prev.cols), 1);
	// Store the Mat-to-IplImage conversion in a named variable; taking the address
	// of a temporary was only tolerated by old MSVC.
	IplImage ipl_prev = prev;
	IplImage *srcimg = &ipl_prev;
	CvVideoWriter *outputVideo;
	// The VideoWriter API works fine here; just make sure the frame size you pass
	// (e.g. cvSize(prev.rows, prev.cols)) matches the real image size, or the
	// resulting video is unusable.
	outputVideo = cvCreateVideoWriter("camera.avi", CV_FOURCC('M', 'J', 'P', 'G'), 20, cvGetSize(srcimg));
	int k = 1;
	// get the total number of frames in the video
	int max_frames = cap.get(CV_CAP_PROP_FRAME_COUNT);
	Mat last_T;
	Mat prev_grey_, cur_grey_;
	int framecoun = 0;
	while (cur.data != NULL) {
		cvtColor(cur, cur_grey, COLOR_BGR2GRAY);
		framecoun++;
		// vector from prev to cur  
		vector <Point2f> prev_corner, cur_corner;
		vector <Point2f> prev_corner2, cur_corner2;
		vector <uchar> status;
		vector <float> err;
		// parameter meanings: 200 is the maximum number of corners to detect; 0.01 is the corner quality level (larger means stricter quality; 0.4 is sometimes used); 30 is the minimum pixel distance between corners
		goodFeaturesToTrack(prev_grey, prev_corner, 200, 0.01, 30);
		calcOpticalFlowPyrLK(prev_grey, cur_grey, prev_corner, cur_corner, status, err);

		// weed out bad matches  
		for (size_t i = 0; i < status.size(); i++) {
			if (status[i]) {
				prev_corner2.push_back(prev_corner[i]);
				cur_corner2.push_back(cur_corner[i]);
			}
		}

		// translation + rotation only; estimateRigidTransform uses RANSAC internally,
		// so it can pick representative point pairs even when some matches are noisy
		T = estimateRigidTransform(prev_corner2, cur_corner2, false); // false = rigid transform, no scaling/shearing  
		// in rare cases no transform is found. We'll just use the last known good transform.  
		/* This guards against a crash: when the number of detected point pairs falls
		   below the 3 that estimateRigidTransform needs, reuse the previous transform
		   instead. */
		if (T.data == NULL) {
			last_T.copyTo(T);
		}
		T.copyTo(last_T); 
		// decompose T  
		double dx = T.at<double>(0, 2);
		double dy = T.at<double>(1, 2);
		double da = atan2(T.at<double>(1, 0), T.at<double>(0, 0));
		//prev_to_cur_transform.push_back(TransformParam(dx, dy, da));  
		out_transform << k << " " << dx << " " << dy << " " << da << endl;  
		// Accumulated frame to frame transform  
		x += dx; // accumulate the frame-to-frame motion: this principal motion is what gets stabilised
		y += dy;
		a += da;

		out_trajectory << k << " " << x << " " << y << " " << a << endl;

		z = Trajectory(x, y, a);
		//  
		if (k == 1){
			// initial guesses  
			X = Trajectory(0, 0, 0); //Initial estimate,  set 0  
			P = Trajectory(1, 1, 1); //set error variance,set 1  
		}
		else
		{
			//time update (prediction)
			//Remember how A is constructed: each measured quantity generally gets two
			//states, X and delt_X. X depends on delt_X but delt_X does not depend on X,
			//so in A the row for X has two 1s and the row for delt_X has a single 1.
			X_ = X; //X_(k) = X(k-1);  
			P_ = P + Q; //P_(k) = P(k-1)+Q;  
			// measurement update(correction)  
			K = P_ / (P_ + R); //gain;K(k) = P_(k)/( P_(k)+R );  
			X = X_ + K*(z - X_); //z-X_ is residual,X(k) = X_(k)+K(k)*(z(k)-X_(k));   
			P = (Trajectory(1, 1, 1) - K)*P_; //P(k) = (1-K(k))*P_(k);  
		}
		//smoothed_trajectory.push_back(X);  
		out_smoothed_trajectory << k << " " << X.x << " " << X.y << " " << X.a << endl;
		//-  
		// target - current: diff_x is the difference between the estimated principal
		// motion and the actual principal motion; it is used as the correction offset
		double diff_x = X.x - x;
		double diff_y = X.y - y;
		double diff_a = X.a - a;

		dx = dx + diff_x; // apply the actual motion correction
		dy = dy + diff_y;
		da = da + diff_a;

		out_new_transform << k << " " << dx << " " << dy << " " << da << endl;
		//  
		T.at<double>(0, 0) = cos(da);
		T.at<double>(0, 1) = -sin(da);
		T.at<double>(1, 0) = sin(da);
		T.at<double>(1, 1) = cos(da);

		T.at<double>(0, 2) = dx;
		T.at<double>(1, 2) = dy;
		Mat cur2;

		warpAffine(prev, cur2, T, cur.size());
		//cur2 = cur2(Range(vert_border, cur2.rows - vert_border), Range(HORIZONTAL_BORDER_CROP, cur2.cols - HORIZONTAL_BORDER_CROP));

		// Resize cur2 back to cur size, for better side by side comparison  
		//resize(cur2, cur2, cur.size());
		IplImage ipl_out = cur2;   // named variable, not the address of a temporary
		cvWriteFrame(outputVideo, &ipl_out);
		// Now draw the original and stabilised frames side by side on one canvas
		Mat canvas = Mat::zeros(cur.rows, cur.cols * 2 + 10, cur.type());
		prev.copyTo(canvas(Range::all(), Range(0, cur2.cols)));
		cur2.copyTo(canvas(Range::all(), Range(cur2.cols + 10, cur2.cols * 2 + 10)));

		// If too big to fit on the screen, then scale it down by 2, hopefully it'll fit :)  
		if (canvas.cols > 1920) {
			resize(canvas, canvas, Size(canvas.cols / 2, canvas.rows / 2));
		}	 
		imshow("before and after", canvas);
		waitKey(10);
		//  
		prev = cur.clone();//cur.copyTo(prev);  
		cur_grey.copyTo(prev_grey);
		cout << "Frame: " << k << "/" << max_frames << " - good optical flow: " << prev_corner2.size() << endl;
		k++;
		cap >> cur;
	}
	cvReleaseVideoWriter(&outputVideo);
	return 1;
}


Below is the example program from OpenCV:

Tracking of a point with one degree of freedom: although the point moves in a 2-D plane, it moves along a circular arc, so it has only one degree of freedom, the angle, and the problem is 1-D. The motion is uniform, so we build a constant-velocity model and set the state vector to x = [x1, x2] = [angle, angular velocity].

 

 

#include "opencv2/video/tracking.hpp"  
#include "opencv2/highgui/highgui.hpp"  
#include <iostream>  
#include <stdio.h>  
using namespace std;
using namespace cv;

//compute the point's coordinates in the window; the origin is at the top-left corner, hence the minus sign in front of sin  
static inline Point calcPoint(Point2f center, double R, double angle)
{
	return center + Point2f((float)cos(angle), (float)-sin(angle))*(float)R;
}

static void help()
{
	printf("\nExample of calls to OpenCV's Kalman filter.\n"
		"   Tracking of rotating point.\n"
		"   Rotation speed is constant.\n"
		"   Both state and measurements vectors are 1D (a point angle),\n"
		"   Measurement is the real point angle + gaussian noise.\n"
		"   The real and the estimated points are connected with yellow line segment,\n"
		"   the real and the measured points are connected with red line segment.\n"
		"   (if Kalman filter works correctly,\n"
		"    the yellow segment should be shorter than the red one).\n"
		"\n"
		"   Pressing any key (except ESC) will reset the tracking with a different speed.\n"
		"   Pressing ESC will stop the program.\n"
		);
}

int main(int, char**)
{
	help();
	Mat img(500, 500, CV_8UC3);
	KalmanFilter KF(2, 1, 0);                                    //create the Kalman filter object KF  
	Mat state(2, 1, CV_32F);                                     //state: (angle, delta-angle)  
	Mat processNoise(2, 1, CV_32F);
	Mat measurement = Mat::zeros(1, 1, CV_32F);                 //the measurement value  
	char code = (char)-1;

	for (;;)
	{
		//1. initialization  
		randn(state, Scalar::all(0), Scalar::all(0.1));          //random initial state  
		KF.transitionMatrix = *(Mat_<float>(2, 2) << 1, 1, 0, 1);  //transition matrix A = [1,1;0,1]  


		//set the following matrices to (scaled) identity; H is normally set to 1
		setIdentity(KF.measurementMatrix);                             //measurement matrix H  
		// wk is the process noise, assumed zero-mean multivariate normal with covariance Qk (Q)  
		setIdentity(KF.processNoiseCov, Scalar::all(1e-5));            //process noise covariance matrix Q  
		// vk is the measurement noise, zero-mean normal with covariance Rk  
		setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));        //measurement noise covariance matrix R  
		setIdentity(KF.errorCovPost, Scalar::all(1));                  //posteriori error estimate covariance matrix P  

		randn(KF.statePost, Scalar::all(0), Scalar::all(0.1));          //initialize x(0)  

		for (;;)
		{
			Point2f center(img.cols*0.5f, img.rows*0.5f);          //center of the image  
			float R = img.cols / 3.f;                                //radius  
			double stateAngle = state.at<float>(0);                //angle of the tracked point  
			Point statePt = calcPoint(center, R, stateAngle);     //coordinates of the tracked point, statePt  

			//2. prediction  
			Mat prediction = KF.predict();                       //compute the predicted value, returns x'  
			double predictAngle = prediction.at<float>(0);          //predicted angle  
			Point predictPt = calcPoint(center, R, predictAngle);   //predicted point coordinates, predictPt  


			//3. update  
			//measurement is the measured value  
			randn(measurement, Scalar::all(0), Scalar::all(sqrt(KF.measurementNoiseCov.at<float>(0))));     //draw measurement noise; randn expects a standard deviation, hence sqrt(R)  

			// generate measurement  
			measurement += KF.measurementMatrix*state;  //z = H*x + vk  

			double measAngle = measurement.at<float>(0);
			Point measPt = calcPoint(center, R, measAngle);

			// plot points  
			//a macro that draws a cross, a handy trick worth noting  
#define drawCross( center, color, d )                                 \
			line(img, Point(center.x - d, center.y - d), \
				Point(center.x + d, center.y + d), color, 1, CV_AA, 0); \
				line(img, Point(center.x + d, center.y - d), \
				Point(center.x - d, center.y + d), color, 1, CV_AA, 0)

			img = Scalar::all(0);
			drawCross(statePt, Scalar(255, 255, 255), 3);
			drawCross(measPt, Scalar(0, 0, 255), 3);
			drawCross(predictPt, Scalar(0, 255, 0), 3);
			line(img, statePt, measPt, Scalar(0, 0, 255), 3, CV_AA, 0);
			line(img, statePt, predictPt, Scalar(0, 255, 255), 3, CV_AA, 0);


			//call the KalmanFilter correct method to obtain the posterior state,
			//i.e. the state corrected by the observation  
			if (theRNG().uniform(0, 4) != 0)
				KF.correct(measurement);

			//without noise this is exact uniform circular motion; with process noise
			//added it stays close to it, though the direction of motion may drift  
			randn(processNoise, Scalar(0), Scalar::all(sqrt(KF.processNoiseCov.at<float>(0, 0))));   // wk, the process noise  
			/* this prepares the next true state for the measurement step above; the state
			   equation and the measurement equation are used together, and measurement is Zk */
			state = KF.transitionMatrix*state + processNoise;

			imshow("Kalman", img);
			code = (char)waitKey(100);

			if (code > 0)
				break;
		}
		if (code == 27 || code == 'q' || code == 'Q')
			break;
	}

	return 0;
}


Below is mouse tracking:

 
#include "opencv2/video/tracking.hpp"  
#include "opencv2/highgui/highgui.hpp"  
#include <stdio.h>  
using namespace cv;  
using namespace std;  
  
const int winHeight=600;  
const int winWidth=800;  
  
  
Point mousePosition= Point(winWidth>>1,winHeight>>1);  
  
//mouse event callback  
void mouseEvent(int event, int x, int y, int flags, void *param )  
{  
    if (event==CV_EVENT_MOUSEMOVE) {  
        mousePosition = Point(x,y);  
    }  
}  
  
int main (void)  
{  
    RNG rng;  
    //1.kalman filter setup  
    const int stateNum=4;                                      //state: 4x1 vector (x, y, dx, dy)  
    const int measureNum=2;                                    //measurement: 2x1 vector (x, y)    
    KalmanFilter KF(stateNum, measureNum, 0);     
  
    KF.transitionMatrix = *(Mat_<float>(4, 4) <<1,0,1,0,0,1,0,1,0,0,1,0,0,0,0,1);  //transition matrix A  
    setIdentity(KF.measurementMatrix);                                             //measurement matrix H  
    setIdentity(KF.processNoiseCov, Scalar::all(1e-5));                            //process noise covariance matrix Q  
    setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));                        //measurement noise covariance matrix R  
    setIdentity(KF.errorCovPost, Scalar::all(1));                                  //posteriori error estimate covariance matrix P  
    rng.fill(KF.statePost,RNG::UNIFORM,0,winHeight>winWidth?winWidth:winHeight);   //initial state x(0)  
    Mat measurement = Mat::zeros(measureNum, 1, CV_32F);                           //measurement buffer; it is updated every frame, so it must be defined up front  
      
    namedWindow("kalman");  
    setMouseCallback("kalman",mouseEvent);  
          
    Mat image(winHeight,winWidth,CV_8UC3,Scalar(0));  
  
    while (1)  
    {  
        //2.kalman prediction  
        Mat prediction = KF.predict();  
        Point predict_pt = Point(prediction.at<float>(0),prediction.at<float>(1) );   //predicted position (x', y')  
  
        //3.update measurement  
        measurement.at<float>(0) = (float)mousePosition.x;  
        measurement.at<float>(1) = (float)mousePosition.y;          
  
        //4.update  
        KF.correct(measurement);  
  
        //draw  
        image.setTo(Scalar(255,255,255,0));  
        circle(image,predict_pt,5,Scalar(0,255,0),3);    //predicted point in green  
        circle(image,mousePosition,5,Scalar(255,0,0),3); //current position in blue (Scalar is BGR)          
          
        char buf[256];  
        sprintf_s(buf,256,"predicted position:(%3d,%3d)",predict_pt.x,predict_pt.y);  
        putText(image,buf,Point(10,30),CV_FONT_HERSHEY_SCRIPT_COMPLEX,1,Scalar(0,0,0),1,8);  
        sprintf_s(buf,256,"current position :(%3d,%3d)",mousePosition.x,mousePosition.y);  
        putText(image,buf,cvPoint(10,60),CV_FONT_HERSHEY_SCRIPT_COMPLEX,1,Scalar(0,0,0),1,8);  
          
        imshow("kalman", image);  
        int key=waitKey(3);  
        if (key==27){//esc     
            break;     
        }         
    }  
    return 0;
}

 
 

In the examples above, the state vector is twice the size of the measurement vector: every measured quantity gets a companion Δ (rate) state. For instance, when the measurement is (x, y, angle), the state is the six quantities (x, y, angle, Δx, Δy, Δangle). (This is the usual constant-velocity modelling choice, not a hard requirement of the filter.)
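
As a concrete illustration of that rule of thumb, here is a minimal sketch (assumed noise values, mirroring the examples above) of a 6-state, 3-measurement constant-velocity KalmanFilter setup:

#include <opencv2/video/tracking.hpp>
using namespace cv;

// Constant-velocity model: state (x, y, angle, dx, dy, dangle), measurement (x, y, angle).
KalmanFilter makeConstVelocityKF()
{
	KalmanFilter KF(6, 3, 0);
	// Each measured row has two 1s (itself and its rate); each rate row has one 1.
	KF.transitionMatrix = (Mat_<float>(6, 6) <<
		1, 0, 0, 1, 0, 0,
		0, 1, 0, 0, 1, 0,
		0, 0, 1, 0, 0, 1,
		0, 0, 0, 1, 0, 0,
		0, 0, 0, 0, 1, 0,
		0, 0, 0, 0, 0, 1);
	setIdentity(KF.measurementMatrix);                      // H = [I3 | 0], picks out (x, y, angle)
	setIdentity(KF.processNoiseCov, Scalar::all(1e-5));     // Q, tune experimentally
	setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1)); // R, tune experimentally
	setIdentity(KF.errorCovPost, Scalar::all(1));           // P
	return KF;
}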

Reference posts:

http://blog.csdn.net/GDFSG/article/details/50904811 (the circular-motion and mouse-tracking OpenCV examples)

http://blog.csdn.net/xiahouzuoxin/article/details/39582483 (an intuitive understanding of the Kalman filter)


Below is an example based on OpenCV's KalmanFilter class. It uses simple matrices that you can modify to fit different problems.

```c++
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    // Kalman filter matrices
    // A: state transition matrix
    // B: control matrix
    // H: observation matrix
    // Q: process noise covariance matrix
    // R: measurement noise covariance matrix
    Mat A = (Mat_<float>(2, 2) << 1, 0.1, 0, 1);
    Mat B = (Mat_<float>(2, 1) << 0.5, 1);
    Mat H = (Mat_<float>(1, 2) << 1, 0);
    Mat Q = (Mat_<float>(2, 2) << 0.1, 0, 0, 0.1);
    Mat R = (Mat_<float>(1, 1) << 1);

    // initial state vector and covariance matrix
    Mat x = (Mat_<float>(2, 1) << 0, 0);
    Mat P = (Mat_<float>(2, 2) << 1, 0, 0, 1);

    // the set of observations
    vector<float> measurements = { 1.2, 1.7, 2.4, 3.6, 4.9, 5.0 };

    // initialize the Kalman filter
    KalmanFilter kf(2, 1, 0);
    kf.transitionMatrix = A;
    kf.controlMatrix = B;   // unused below, since predict() is called without a control input
    kf.measurementMatrix = H;
    kf.processNoiseCov = Q;
    kf.measurementNoiseCov = R;
    kf.statePost = x;
    kf.errorCovPost = P;

    // iterate the filter
    for (size_t i = 0; i < measurements.size(); i++) {
        Mat measurement = (Mat_<float>(1, 1) << measurements[i]);
        Mat prediction = kf.predict();
        Mat correction = kf.correct(measurement);
        cout << "Predicted: " << prediction << endl;
        cout << "Corrected: " << correction << endl;
    }
    return 0;
}
```

This code runs a sequence of observations through the Kalman filter. Each iteration performs a predict step and a correct step and prints both results. You can modify the matrices to fit your own problem.