HSV color model
HSV is also called the hexcone model. Its parameters are hue (H), saturation (S), and value (V). Through experiments, we obtained the HSV range that characterizes fire; in the code below the mask bounds are roughly H in [0, 50], S in [120, 250], V in [200, 250].
HSV color-space conversion in opencv-python:
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
"""
frame => Image matrix
"""
Compared with the LAB color model: although LAB is more sensitive to brightness, HSV proved more accurate for fire localization in our tests. Of course, the poor result of the LAB model may also be due to the small number of experiments.
The flow chart
PS: refer to reference [1]
Detect and locate fire, and build the CNN training data
The steps in this section are mostly the same as in the previous one. The differences are:
- Remove the Gaussian filter. Since we only need to detect the area of the fire, we do not need the precise outline of the flame.
- Use a CNN to verify each candidate. Because flame localization based on HSV filtering and image morphology inevitably produces false detections of non-fire targets, we pass each candidate region through a neural network classifier; here we train a VGG-style CNN.
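As a hedged sketch (not the article's exact network, whose architecture is not specified in detail), a small VGG-style binary classifier for the 48x48 crops that `make_prediction()` receives could look like this; the layer sizes are illustrative assumptions:

```python
# A small VGG-style binary classifier sketch for 48x48 BGR crops.
# Layer widths are assumptions, not the article's trained fire1.h5 model.
from keras import layers, models

def build_vgg_style(input_shape=(48, 48, 3), num_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # VGG pattern: stacked 3x3 convolutions, then spatial pooling
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        # Two outputs matching class_dictionary: 0 = fire, 1 = not a fire
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_vgg_style()
```

After training on labeled crops, `model.save('fire1.h5')` would produce a weights file of the kind that `keras_model()` in the script below loads.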
Code:
import cv2
import numpy as np
from keras.models import load_model
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'


def make_prediction(image, model, class_dictionary):
    img = image / 255.
    # Convert to a 4D tensor (batch of one image)
    image = np.expand_dims(img, axis=0)
    # Predict the class of the crop
    class_predicted = model.predict(image)
    inID = np.argmax(class_predicted[0])
    label = class_dictionary[inID]
    return label


def keras_model(weights_path):
    model = load_model(weights_path)
    return model


weights_path = 'fire1.h5'
# Define the two-class dictionary
class_dictionary = {0: 'fire', 1: 'not a fire'}
model = keras_model(weights_path)

## The commented-out lines below switch the script to live video input ##
# cap = cv2.VideoCapture(1)
while True:
    # _, frame = cap.read()
    frame = cv2.imread('picture/exp/fire_6.jpg')
    frame = cv2.resize(frame, (400, 400))
    frame = cv2.GaussianBlur(frame, (3, 3), 1)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Build the fire-color mask
    l_m = np.array([0, 120, 200])
    u_m = np.array([50, 250, 250])
    mask = cv2.inRange(hsv, l_m, u_m)
    # Morphological closing to merge nearby fire regions
    kernel1 = np.ones((15, 15), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel1)
    res = cv2.bitwise_and(frame, frame, mask=mask)  # masked image, for inspection
    img = frame.copy()
    ret, thresh = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)
    # findContours returns 3 values in OpenCV 3.x and 2 in 4.x;
    # taking the last two works for both versions.
    contours, hierarchy = cv2.findContours(
        thresh,
        cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_NONE
    )[-2:]
    for cnt in contours:
        l = cv2.arcLength(cnt, True)
        if l > 50:  # ignore tiny contours
            x, y, w, h = cv2.boundingRect(cnt)
            # CNN input: crop the candidate region and resize to 48x48
            img_test = frame[y:y + h, x:x + w]
            img_test = cv2.resize(img_test, (48, 48))
            label = make_prediction(img_test, model, class_dictionary)
            if label == 'fire':
                img = cv2.rectangle(
                    img, (x, y), (x + w, y + h), (0, 0, 255), 2)
                cv2.putText(
                    img, "Fire", (x, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            else:
                img = cv2.rectangle(
                    img, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.putText(
                    img, "Not a Fire", (x, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("img", img)
    key = cv2.waitKey(1)
    if key == 27:  # Esc quits
        break
# cap.release()
cv2.destroyAllWindows()
Result: Here we use a sodium lamp as a contrast case. During the experiments we also found that some smaller (yellow) signal lights and brighter objects could be misjudged as flames, so both the data set and the algorithm need improvement.
So how does it perform on video? We ran an experiment with the following results. During testing, however, the detection quality varied with the fluctuation of the fire: the stronger the flame flickers, the worse the detection. The image processing in this algorithm therefore still has plenty of room for improvement.
Conclusion
- This paper uses a neural network to realize the fire recognition proposed in the previous paper. A CNN is used here; I had also hoped to implement an RNN/LSTM, but its results were poor in the experiments. In the future I will look for a better solution.
- Although my implementation is not perfect, I think the overall process is a good approach.
- A mathematical understanding of the algorithms is very important, especially in the field of AI. The algorithm described in the literature could not be fully reproduced in this paper, fundamentally because my knowledge of CNNs is incomplete, so I need to keep strengthening my study of neural networks.
Reference
[1] 李莹, 李忠, 李海洋, 孙可可. 结合颜色空间和CNN的火焰检测[J]. 计算机时代, 2019(12): 67-70.
LI Ying, LI Zhong, LI Haiyang, SUN Keke. Flame detecting with the combination of color space and CNN[J]. Computer Era, 2019(12): 67-70.
[2] 刘金利, 张培玲. 改进LeNet-5网络在图像分类中的应用[J]. 计算机工程与应用, 2019, 55(15): 32-37+95.
LIU Jinli, ZHANG Peiling. Application of improved LeNet-5 network in image classification[J]. Computer Engineering and Applications, 2019, 55(15): 32-37+95.