First Steps in Fire Detection

CV | A method for fire detection
Hello everyone! If you are interested in object detection in video streams with Python, follow along. Let's study how to find fire in a video.
1. Import the modules cv2 and numpy

import numpy as np
import cv2

2. Open the video that we will process below.

cap = cv2.VideoCapture('test2.mp4')

In this step, move or copy your video file into the same directory as your Python project, or pass an absolute path instead.
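If the path is wrong, cap.read() will simply return empty frames, so it is worth checking that the capture opened at all. A minimal sketch of such a check (the file name 'test2.mp4' is just the example used above):

import cv2

cap = cv2.VideoCapture('test2.mp4')  # the file must sit next to this script, or use an absolute path
if not cap.isOpened():               # the capture failed to open, e.g. because the path is wrong
    raise FileNotFoundError('Could not open test2.mp4 - check the path')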

There is no doubt that the flames move constantly in the video, so I use a background-subtraction algorithm based on a Gaussian Mixture Model (MOG2), one of the traditional moving-target detection methods. Let's do it.
3. Create the background subtractor using the function cv2.createBackgroundSubtractorMOG2()

fgbg = cv2.createBackgroundSubtractorMOG2()
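If the default settings pick up too much background flicker, cv2.createBackgroundSubtractorMOG2() also accepts a few tuning parameters; the values below are only illustrative:

# history: number of frames used to build the background model
# varThreshold: threshold on the squared Mahalanobis distance used to decide foreground
# detectShadows: if True, shadows are marked gray (127) in the mask instead of white
fgbg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)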

4. Enter the main loop

while True:
     ret, frame = cap.read()  # read the next frame of the video
     if not ret:  # stop when the video ends or a frame cannot be read
         break
     fgmask = fgbg.apply(frame)  # get the foreground mask
     kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))  # structuring element used to suppress noise in the mask
     fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)  # morphological opening removes small noise blobs


So we have detected the fire successfully, and the mask looks clean with little noise. But if we do not use cv2.morphologyEx, let's see the result.
[Figure: foreground mask without morphological opening]
Apparently, the foreground mask is full of noise. Whether or not you use a noise-removal function therefore has a great effect on the quality of the mask. There are other functions for suppressing noise as well, including cv2.medianBlur, cv2.GaussianBlur, etc.
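For example, either of the following could be applied to the mask instead of (or in addition to) the morphological opening; the kernel sizes are just starting points:

fgmask = cv2.medianBlur(fgmask, 5)            # good at removing salt-and-pepper noise
fgmask = cv2.GaussianBlur(fgmask, (5, 5), 0)  # smooths the mask; follow with a threshold if a binary mask is needed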

5. Locate the flames in the original frame by finding their contours.

     contours, hierarchy = cv2.findContours(fgmask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # find contours in the foreground mask
     for c in contours:
         perimeter = cv2.arcLength(c, True)  # contour perimeter
         if perimeter > 50:  # ignore small contours that are probably noise
             x, y, w, h = cv2.boundingRect(c)
             cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw a green box around the candidate flame
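As a variation, the contours can also be filtered by their enclosed area instead of their perimeter; the threshold of 100 below is only an example value:

     for c in contours:
         if cv2.contourArea(c) > 100:  # area threshold instead of perimeter
             x, y, w, h = cv2.boundingRect(c)
             cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)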

[Figure: bounding boxes drawn around the detected flames]
6. Show the video and quit the window when the 'Esc' key is pressed

     cv2.imshow('frame1', frame)   # original frame with the bounding boxes
     cv2.imshow('frame', fgmask)   # foreground mask
     k = cv2.waitKey(30) & 0xff
     if k == 27:                   # 27 is the key code of Esc
         break
cap.release()  # don't forget to release the capture
cv2.destroyAllWindows()

So far, it seems easy to detect the flames in the video. This little example may help spark a beginner's interest, but there are still a lot of problems to deal with.
[Figure: the changing numbers on screen are also detected as foreground]
We can see that the numbers changing constantly on screen are also detected by our model. So the first problem is that we need to filter out objects in the foreground image that are not flames. We can distinguish between objects according to their features, and the same goes for flames.
In the following example, I use a static picture to learn how to filter out interference that shares the same RGB and HSI features as flames. You can refer to the following articles:
https://blog.csdn.net/qq_27569955/article/details/51564887
https://blog.csdn.net/weixin_41987641/article/details/81812823
Based on the conclusions about the RGB and HSI features of flames, only the pixels in the original picture that meet the constraints are set to 255; all others are set to 0.
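Concretely, the decision rule used in the program below can be summarized as a per-pixel predicate (is_fire_pixel is just a hypothetical helper to make the conditions explicit, with Rt = redThre and St = saturationTh):

def is_fire_pixel(r, g, b, Rt=115, St=5):
    # hypothetical helper summarizing the RGB/HSI criteria used below
    s = 1 - 3.0 * min(r, g, b) / (r + g + b + 1)  # saturation from the HSI model
    return (r > Rt and r > g > b                   # flame pixels are red-dominant
            and s > 0.2                            # and reasonably saturated
            and s >= (255 - r) / 20
            and s >= (255 - r) * St / Rt)          # saturation requirement relaxes as red grows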

import cv2
import numpy as np

def minnum(a, b, c):
    # return the smallest of the three channel values
    m = a
    if b < m:
        m = b
    if c < m:
        m = c
    return m

img = cv2.imread('fire.jpg')
redThre = 115
saturationTh = 5
height = img.shape[0]
width = img.shape[1]
S = np.zeros([height, width], np.float32)    # saturation is fractional, so the array must be float, not uint8
fireimg = np.zeros([height, width], np.uint8)
B = img[:, :, 0]
G = img[:, :, 1]
R = img[:, :, 2]
for i in range(height):
    for j in range(width):
        # cast to int to avoid uint8 overflow when summing the channels
        b, g, r = int(B[i, j]), int(G[i, j]), int(R[i, j])
        minValue = minnum(b, g, r)
        S[i, j] = 1 - 3.0 * minValue / (r + g + b + 1)
        if (r > redThre and r > g and g > b
                and S[i, j] > 0.2
                and S[i, j] >= (255 - r) / 20
                and S[i, j] >= (255 - r) * saturationTh / redThre):
            fireimg[i, j] = 255
        else:
            fireimg[i, j] = 0
gray_fireImg = np.zeros([img.shape[0], img.shape[1], 1], np.uint8)
gray_fireImg[:, :, 0] = fireimg
cv2.imshow('gray_fireimg', gray_fireImg)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

[Figure: binary fire mask produced by the pixel-by-pixel version]
It is easy to finish this work if you choose to traverse the pixels with for loops, but you can see that the code is not concise at all, and it takes a long time to run, which is not conducive to real-time object detection. So let me introduce the function np.where, which can process the whole pixel matrix at once.
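As a quick illustration, np.where selects elementwise between two values based on a condition:

import numpy as np

a = np.array([10, 200, 50, 240])
mask = np.where(a > 115, 255, 0)  # -> array([  0, 255,   0, 255])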

import cv2
import numpy as np

img = cv2.imread('fire.jpg')
redThre = 115
saturationTh = 55
# cast the channels to float to avoid uint8 overflow in the arithmetic below
B = img[:, :, 0].astype(np.float32)
G = img[:, :, 1].astype(np.float32)
R = img[:, :, 2].astype(np.float32)
# element-wise minimum of the three channels
minValue = np.where(R <= G, np.where(G <= B, R, np.where(R <= B, R, B)), np.where(G <= B, G, B))
S = 1 - 3.0 * minValue / (R + G + B + 1)
fireImg = np.where(R > redThre,
                   np.where(R >= G,
                            np.where(G >= B,
                                     np.where(S >= 0.2,
                                              np.where(S >= (255 - R) / 20,
                                                       np.where(S >= (255 - R) * saturationTh / redThre, 255, 0),
                                                       0), 0), 0), 0), 0)
gray_fireImg = np.zeros([fireImg.shape[0], fireImg.shape[1], 1], np.uint8)
gray_fireImg[:, :, 0] = fireImg
cv2.imshow('img', img)
cv2.imshow('gray_img', gray_fireImg)
cv2.waitKey(0)
cv2.destroyAllWindows()

[Figure: binary fire mask produced by the np.where version]
The latter program apparently takes far less time than the for-loop version, so try to learn the function np.where!
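If you want to measure the difference yourself, a simple (if rough) way is to wrap each version with time.perf_counter(); the exact numbers depend on the image size and your machine:

import time

start = time.perf_counter()
# ... run the pixel-by-pixel loop or the np.where version here ...
print('elapsed: %.3f s' % (time.perf_counter() - start))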
If you look more closely at the figure, you will notice a lot of holes inside the flame. Therefore my filter function changed a bit, as follows. I now use a block to traverse the image, and the block size can be adjusted, which has a great influence on how fast the program runs. If enough pixels inside the block meet the RGB and HSI conditions (in the code below a single matching pixel is enough, but you can raise the threshold, for example to a quarter of the block, depending on the result), the block is kept unchanged; otherwise it is set to 0. Let's do it.

def checkcolor(img1, redThre, saturationTh, img2):
    # pass in the original image and the binary mask, plus the parameters Rt and St
    # used in the RGB and HSI criteria
    height = img1.shape[0]
    width = img1.shape[1]
    fire = np.zeros([img1.shape[0], img1.shape[1], 3], dtype=img1.dtype)
    fire2 = np.zeros([img2.shape[0], img2.shape[1]], dtype=img2.dtype)
    blocksize = 3
    zero = np.zeros([blocksize, blocksize, 3], dtype=img1.dtype)
    zero2 = np.zeros([blocksize, blocksize], dtype=img2.dtype)
    for i in range(height // blocksize):
        for j in range(width // blocksize):
            xaxis = i * blocksize
            yaxis = j * blocksize
            win = img1[xaxis:xaxis + blocksize, yaxis:yaxis + blocksize]
            win2 = img2[xaxis:xaxis + blocksize, yaxis:yaxis + blocksize]
            # cast to float to avoid uint8 overflow in the arithmetic below
            B = win[:, :, 0].astype(np.float32)
            G = win[:, :, 1].astype(np.float32)
            R = win[:, :, 2].astype(np.float32)
            minValue = np.where(R <= G, np.where(G <= B, R, np.where(R <= B, R, B)), np.where(G <= B, G, B))
            S = 1 - 3.0 * minValue / (R + G + B + 1)
            # fireImg = np.array(np.where(R > redThre, np.where(R >= G, np.where(G >= B, 255, 0), 0), 0))  # criterion using only the RGB model
            # fireImg = np.array(np.where(R > redThre, np.where(R >= G, np.where(G >= B, np.where(S >= 0.2, np.where(S >= (255 - R) / 20, np.where(S >= (255 - R) * saturationTh / redThre, 255, 0), 0), 0), 0), 0), 0))  # another RGB/HSI criterion with the extra condition S >= (255 - R) / 20
            fireImg = np.array(np.where(R > redThre, np.where(R >= G, np.where(G >= B, np.where(S >= 0.2, np.where(S >= (255 - R) * saturationTh / redThre, 255, 0), 0), 0), 0), 0))
            # keep the block if any pixel satisfies the criteria; raise this threshold
            # (e.g. to a quarter of the block area) to be stricter
            if np.sum(fireImg) > 0:
                fire[xaxis:xaxis + blocksize, yaxis:yaxis + blocksize] = win
                fire2[xaxis:xaxis + blocksize, yaxis:yaxis + blocksize] = win2
            else:
                fire[xaxis:xaxis + blocksize, yaxis:yaxis + blocksize] = zero
                fire2[xaxis:xaxis + blocksize, yaxis:yaxis + blocksize] = zero2
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    fire2 = cv2.morphologyEx(fire2, cv2.MORPH_CLOSE, kernel)
    fire = cv2.morphologyEx(fire, cv2.MORPH_CLOSE, kernel)
    fire = cv2.dilate(fire, kernel, iterations=1)
    return fire, fire2
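For completeness, here is a minimal sketch of how checkcolor might be driven on a single image; the file name 'fire.jpg' and the all-white placeholder mask are just assumptions for illustration (in the video pipeline the mask would be fgmask):

img = cv2.imread('fire.jpg')
mask = np.full(img.shape[:2], 255, np.uint8)  # placeholder binary mask of the same size
fire, fire2 = checkcolor(img, 115, 55, mask)
cv2.imshow('fire', fire)
cv2.imshow('fire2', fire2)
cv2.waitKey(0)
cv2.destroyAllWindows()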

[Figure: result of the block-based filter]
Actually, it did work. But there is still a lot of work to do to improve my filter function.
Another picture:
[Figure: another test image]
After applying my filter model to detect the fire, I successfully filter out the sofa. But the floor lit up by the flame is also detected by my filter, which reveals a problem: we cannot identify a flame only by its RGB and HSI features. And the program still takes a fair amount of time to run, which means we cannot use this model for real-time fire detection in video yet.
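One direction I plan to try is combining the two cues that already appear in this post: the motion mask from the background subtractor and the color mask from the RGB/HSI filter. A minimal sketch, assuming both masks are single-channel uint8 images of the same size:

color_mask = gray_fireImg[:, :, 0]              # drop the singleton channel
combined = cv2.bitwise_and(fgmask, color_mask)  # keep only pixels that are both moving and flame-colored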

That's what I have learned recently, and I will continue to work on the problems in fire detection in the future.
Thanks for reading. Advice and comments are welcome!

written by Grant.
