Image Processing: Motion Detection

Today I came across an algorithm for motion detection. Its results look quite good, so it is worth studying when I have time.

Motion Detector

Introduction

There are many approaches for motion detection in continuous video streams. All of them are based on comparing the current video frame with one of the previous frames or with something that we'll call the background. In this article I'll describe some of the most common approaches.

In describing the algorithms, I'll use the image processing library I described in my previous article, so familiarity with it will help.

The demo applications support the following types of video sources:

  • AVI files (using Video for Windows, interop library is included);
  • updating JPEG from internet cameras;
  • MJPEG (motion JPEG) streams from different internet cameras;
  • MMS Stream - Microsoft Media Services;
  • local capture device (USB cameras or other capture devices).

Algorithms

One of the most common approaches is to compare the current frame with the previous one. This is useful in video compression, where you need to estimate the changes and write only the changes, not the whole frame. But it is not the best choice for motion detection applications. Let me describe the idea more closely.

Assume that we have an original 24 bpp RGB image called the current frame (image), a grayscale copy of it (currentFrame) and a previous video frame, also grayscaled (backgroundFrame). First of all, let's find the regions where the two frames differ. For this purpose we can use Difference and Threshold filters.

// create filters
Difference differenceFilter = new Difference();
IFilter thresholdFilter = new Threshold(15, 255);
// set background frame as an overlay for difference filter
differenceFilter.OverlayImage = backgroundFrame;
// apply the filters
Bitmap tmp1 = differenceFilter.Apply(currentFrame);
Bitmap tmp2 = thresholdFilter.Apply(tmp1);

At this step we'll get an image with white pixels in the places where the current frame differs from the previous frame by more than the specified threshold value. It's already possible to count these pixels, and if their number is greater than a predefined alarm level, we can signal a motion event.
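The differencing-and-thresholding step can be sketched in plain Python (this is a simplified stand-in for the Difference and Threshold filters above, not the library's actual code; frames are modelled as 2-D lists of grayscale values, and the threshold of 15 matches the snippet):

```python
def detect_motion(current, background, threshold=15):
    """Return a binary motion map (255 = changed pixel) and the
    fraction of pixels that changed between the two frames."""
    height, width = len(current), len(current[0])
    motion = [[0] * width for _ in range(height)]
    changed = 0
    for y in range(height):
        for x in range(width):
            # white pixel where the frames differ by at least the threshold
            if abs(current[y][x] - background[y][x]) >= threshold:
                motion[y][x] = 255
                changed += 1
    return motion, changed / (width * height)

# tiny example: one pixel changed noticeably between frames
bg  = [[10, 10], [10, 10]]
cur = [[10, 200], [10, 10]]
motion, level = detect_motion(cur, bg)
```

Comparing `level` against a predefined alarm threshold is exactly the "signal a motion event" check described above.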

But most cameras produce a noisy image, so we'll detect motion in places where there is none at all. To remove random noisy pixels, we can use an Erosion filter, for example. Then we'll get mostly only the regions where there was actual motion.

// create filter
IFilter erosionFilter = new Erosion();
// apply the filter 
Bitmap tmp3 = erosionFilter.Apply(tmp2);
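To make the noise-removal step concrete, here is a minimal Python sketch of 3×3 binary erosion (an illustration of the idea, not the library's Erosion implementation; for simplicity, border pixels are set to black, whereas real implementations may handle borders differently):

```python
def erode3x3(img):
    """Binary 3x3 erosion: a pixel stays white (255) only if every
    pixel in its 3x3 neighbourhood is white; isolated noise vanishes."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx] == 255
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 255
    return out

# a lone noisy pixel disappears, a solid 3x3 region keeps its centre
noise = [[0] * 5 for _ in range(5)]
noise[2][2] = 255
block = [[0] * 5 for _ in range(5)]
for y in (1, 2, 3):
    for x in (1, 2, 3):
        block[y][x] = 255
```

A single-pixel blob has no fully white neighbourhood, so erosion removes it, while genuine motion regions survive (slightly shrunk).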

The simplest motion detector is ready! We can highlight the motion regions if needed.

// extract red channel from the original image
IFilter extractChannel = new ExtractChannel(RGB.R);
Bitmap redChannel = extractChannel.Apply(image);
// merge red channel with motion regions
Merge mergeFilter = new Merge();
mergeFilter.OverlayImage = tmp3;
Bitmap tmp4 = mergeFilter.Apply(redChannel);
// replace red channel in the original image
ReplaceChannel replaceChannel = new ReplaceChannel(RGB.R);
replaceChannel.ChannelImage = tmp4;
Bitmap tmp5 = replaceChannel.Apply(image);

Here is the result of it:

Simplest motion detector

From the above picture we can see the disadvantages of the approach. If the object is moving smoothly, we'll receive only small changes from frame to frame, so it's impossible to capture the whole moving object. Things get worse when the object moves so slowly that the algorithm gives no result at all.

There is another approach: compare the current frame not with the previous one but with the first frame in the video sequence. If there were no objects in the initial frame, comparing the current frame with the first one will give us the whole moving object, independently of its motion speed. But the approach has a big disadvantage: what happens if there was, for example, a car in the first frame and then it drove away? We'll always detect motion in the place where the car was. Of course, we can renew the initial frame occasionally, but this still won't give good results in cases where we cannot guarantee that the first frame contains only static background. The inverse situation is also possible: if I hang a picture on the wall of the room, motion will be detected there until the initial frame is renewed.

The most efficient algorithms are based on building a so-called background of the scene and comparing each current frame with it. There are many approaches to building the background, but most of them are too complex. I'll describe my approach here; it's rather simple and can be implemented very quickly.

As in the previous case, let's assume that we have an original 24 bpp RGB image called the current frame (image), a grayscale copy of it (currentFrame) and a background frame, also grayscaled (backgroundFrame). At the beginning, we take the first frame of the video sequence as the background frame. Then we always compare the current frame with the background one. But that alone would give us the result I've described above, which we obviously don't want. Instead, we "move" the background frame towards the current frame by a specified amount (I've used one level per frame): the colors of the pixels in the background frame change by one gray level per frame in the direction of the current frame.

// create filter
MoveTowards moveTowardsFilter = new MoveTowards();
// move background towards current frame
moveTowardsFilter.OverlayImage = currentFrame;
Bitmap tmp = moveTowardsFilter.Apply(backgroundFrame);
// dispose old background
backgroundFrame.Dispose();
backgroundFrame = tmp;
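The one-level-per-frame background update can be sketched in Python as follows (an illustration of the idea behind the MoveTowards filter, not the library's implementation; frames are 2-D lists of grayscale values):

```python
def move_towards(background, current, step=1):
    """Shift each background pixel towards the corresponding
    current-frame pixel by at most `step` gray levels per call."""
    h, w = len(background), len(background[0])
    return [[background[y][x]
             # clamp the per-pixel difference to the [-step, step] range
             + max(-step, min(step, current[y][x] - background[y][x]))
             for x in range(w)]
            for y in range(h)]

bg  = [[100, 50]]
cur = [[103, 50]]
bg = move_towards(bg, cur)   # [[101, 50]]
```

Repeated calls make the background converge to any static scene content, so a parked car (or a newly hung picture) is absorbed into the background after a while instead of triggering motion forever.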

And now, we can use the same approach we've used above. But, let me extend it slightly to get a more interesting result.

// create processing filters sequence
FiltersSequence processingFilter = new FiltersSequence();
processingFilter.Add(new Difference(backgroundFrame));
processingFilter.Add(new Threshold(15, 255));
processingFilter.Add(new Opening());
processingFilter.Add(new Edges());
// apply the filter
Bitmap tmp1 = processingFilter.Apply(currentFrame);

// extract red channel from the original image
IFilter extractChannel = new ExtractChannel(RGB.R);
Bitmap redChannel = extractChannel.Apply(image);
// merge red channel with moving object borders
Merge mergeFilter = new Merge();
mergeFilter.OverlayImage = tmp1;
Bitmap tmp2 = mergeFilter.Apply(redChannel);
// replace red channel in the original image
ReplaceChannel replaceChannel = new ReplaceChannel(RGB.R);
replaceChannel.ChannelImage = tmp2;
Bitmap tmp3 = replaceChannel.Apply(image);

Motion detector - 2nd approach

Now it looks much better!

There is another approach based on the same idea. As in the previous cases, we have an original frame and grayscale versions of it and of the background frame. But this time, let's apply a Pixellate filter to the current frame and to the background before further processing.

// create filter
IFilter pixellateFilter = new Pixellate();
// apply the filter
Bitmap newImage = pixellateFilter.Apply(image);
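Pixellation here just means averaging each small cell of the image, which smooths per-pixel noise before differencing. A minimal Python sketch of the idea (a simplified stand-in for the Pixellate filter, using block averaging):

```python
def pixellate(img, block=8):
    """Replace each block x block cell with its average gray value."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # gather the cell's pixels (clipped at the image borders)
            cell = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(cell) // len(cell)
            # write the average back over the whole cell
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out
```

Because every pixel in a cell now carries the same value, the subsequent difference-and-threshold step effectively works per cell rather than per pixel, which is where the performance-optimization potential mentioned later comes from.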

So, we have pixellated versions of the current and background frames. Now we need to move the background frame towards the current frame as we did before. The only other change is in the main processing step:

// create processing filters sequence
FiltersSequence processingFilter = new FiltersSequence();
processingFilter.Add(new Difference(backgroundFrame));
processingFilter.Add(new Threshold(15, 255));
processingFilter.Add(new Dilatation());
processingFilter.Add(new Edges());
// apply the filter
Bitmap tmp1 = processingFilter.Apply(currentFrame);

After merging the tmp1 image with the red channel of the original image, we get the following:

Motion detector - 3rd approach

Maybe it doesn't look as good as the previous one, but this approach offers great potential for performance optimization.

Conclusion

I've described only the ideas here. To use them in real applications, you need to optimize their implementation. I've used an image processing library for simplicity; it's not a video processing library. Besides, the library allows me to explore different ideas much more quickly than writing optimized solutions from scratch. A small sample of optimization can be found in the sources.

Andrew Kirillov


