Feature Extraction based on LAB color model

Introduction to LAB color model

What is LAB?

The LAB color model is composed of three components: L is the lightness channel, while A and B are two color channels. A runs from green (low values) through grey (mid values) to magenta/red (high values); B runs from blue (low values) through grey (mid values) to yellow (high values). Because lightness is kept separate from color, mixing these channels can describe bright colors well and makes up for some shortcomings of the RGB color space. For this reason, I will try to use LAB to extract the HOG features of the fire.

How to use LAB in OpenCV?

OpenCV provides an interface for converting between color spaces:

## BGR==>LAB ##
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
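
For reference, OpenCV stores an 8-bit LAB image with all three channels scaled to 0–255, where 128 is the neutral (grey) point of the A and B axes. A minimal sketch that converts an image and inspects the per-channel value ranges (the image path is only an example):

import cv2

frame = cv2.imread('picture/fire_2.jpg')    # example path
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)                    # three single-channel uint8 images
print(l.min(), l.max())                     # lightness
print(a.min(), a.max())                     # green (low) <--> red/magenta (high)
print(b.min(), b.max())                     # blue (low) <--> yellow (high)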

ROI (region of interest) division

Step 1: Convert the image from the BGR color space (as loaded by OpenCV) to the LAB color space.

import cv2
import numpy as np

frame = cv2.imread('picture/fire_2.jpg')
frame = cv2.resize(frame, (400, 400))
frame = cv2.GaussianBlur(frame, (3, 3), 1)      # light blur to suppress noise
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
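
Before building the mask, it can help to look at each LAB channel separately to see where the fire stands out; a short viewing sketch (window names are arbitrary):

l, a, b = cv2.split(lab)
cv2.imshow('L (lightness)', l)
cv2.imshow('A (green-red)', a)
cv2.imshow('B (blue-yellow)', b)
cv2.waitKey(0)
cv2.destroyAllWindows()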

Step 2: Build the mask.
The LAB range of the fire region was found experimentally. To enhance the brightness differences in the image, a morphological closing and a high-pass filtering operation (subtracting a Gaussian-blurred copy) are applied to the mask.

'''
Experimentally chosen LAB ranges for the fire region:
L ==> (200, 255)
A ==> (120, 185)
B ==> (135, 255)
'''
l_m = np.array([200, 120, 135])     # lower LAB bound
u_m = np.array([255, 185, 255])     # upper LAB bound

mask = cv2.inRange(lab, l_m, u_m)   # binary mask: 255 inside the range, 0 outside

kernel1 = np.ones((15, 15), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel1)		# close small holes and join nearby fire pixels
mask1 = cv2.GaussianBlur(mask, (3, 3), 0)
mask = mask - mask1		# high-pass filtering: emphasize the edges of the mask
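
The L/A/B bounds above were chosen for this set of test images; other cameras or lighting conditions will usually need re-tuning. One convenient way to do that is with OpenCV trackbars. The sketch below only adjusts the lower bound and keeps the upper bound fixed; the function, window, and trackbar names are hypothetical:

def tune_lab_threshold(lab_img):
    # interactively adjust the lower LAB bound and preview the resulting mask
    cv2.namedWindow('tune')
    for name, init in (('L_min', 200), ('A_min', 120), ('B_min', 135)):
        cv2.createTrackbar(name, 'tune', init, 255, lambda v: None)
    while True:
        lo = np.array([cv2.getTrackbarPos(n, 'tune') for n in ('L_min', 'A_min', 'B_min')])
        hi = np.array([255, 185, 255])          # upper bound kept fixed here
        cv2.imshow('tune', cv2.inRange(lab_img, lo, hi))
        if cv2.waitKey(30) & 0xFF == 27:        # press Esc to quit
            break
    cv2.destroyAllWindows()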

Step 3: Map the ROI region.

res = cv2.bitwise_and(frame, frame, mask=mask)		# keep only the masked pixels

img = frame.copy()
ret, thresh = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)
# findContours returns 3 values in OpenCV 3.x and 2 in 4.x; taking the last two works for both
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]
for cnt in contours:
    l = cv2.arcLength(cnt, True)
    if l > 100:		# filter out contours that are too small (non-ROI areas)
        x, y, w, h = cv2.boundingRect(cnt)
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
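
For running on video frames, the three steps can be wrapped into one helper. detect_fire_rois below is a hypothetical name; it reuses the same experimentally chosen thresholds and perimeter filter as above:

def detect_fire_rois(frame, min_perimeter=100):
    # returns a list of (x, y, w, h) bounding boxes of candidate fire regions
    frame = cv2.GaussianBlur(frame, (3, 3), 1)
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    mask = cv2.inRange(lab, np.array([200, 120, 135]), np.array([255, 185, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    mask = mask - cv2.GaussianBlur(mask, (3, 3), 0)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]
    return [cv2.boundingRect(c) for c in contours if cv2.arcLength(c, True) > min_perimeter]

# usage: boxes = detect_fire_rois(cv2.resize(cv2.imread('picture/fire_2.jpg'), (400, 400)))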

Result:
We tested three scenarios: a single fire, multiple fires, and a daytime scene. The daytime result is the weakest, but the fire area can still be segmented. Since an artificial neural network will be added at a later stage, this has a relatively small impact.
[Result images: single fire, multiple fires, daytime scene]

HOG features of LAB

Referring to the HOG feature calculation method covered earlier (https://blog.csdn.net/qq_40776179/article/details/104992748), we can obtain three feature vectors for the fire. For example:
[HOG feature vector visualizations]

By combining the three sets of feature vectors, we obtain a feature descriptor of the flame. LAB can therefore give a more accurate description of the fire.
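
A minimal sketch of how such a descriptor could be assembled, assuming the three feature vectors are HOG vectors computed on the L, A and B channels of a detected fire ROI; the HOGDescriptor parameters here (64×64 window) are illustrative rather than the exact ones from the earlier post:

# crop one detected ROI from Step 3 (assumes at least one contour was found)
x, y, w, h = cv2.boundingRect(contours[0])
roi = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
lab_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)

# HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

# one HOG vector per LAB channel, concatenated into a single flame descriptor
features = [hog.compute(ch) for ch in cv2.split(lab_roi)]
flame_feature = np.concatenate([f.ravel() for f in features])
print(flame_feature.shape)      # 3 * 1764 = 5292 values with these parameters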

Conclusion

  • The LAB color model draws a clear boundary between color and lightness, which helps fire recognition. However, when a camera captures a fire, the flame typically saturates at a lightness of 255 and appears nearly white, so the algorithm still needs improvement.
  • I am now debugging the neural network, and I believe it will give a better recognition result.
