【Python】【Machine Learning】【Machine Vision】Human Activity Recognition in Low-Light Video【Source Code Included】【Nanyang Technological University, Singapore】


Main Text


Python Code

# Import packages
import random  # generate random sequence
import numpy as np  # numbers and arrays
# import cv2  # OpenCV computer vision library (uncomment to run the sampling/enhancement steps below)
import os  # file and folder management
# import mglearn  # machine learning visualisation library
# from sklearn.svm import LinearSVC  # linear support vector classifier
# import seaborn as sns
# from sklearn.preprocessing import StandardScaler
# from sklearn.decomposition import PCA
# from matplotlib import pyplot as plt  # plotting
import argparse  # argument parsing
import keras  # DNN package
from keras.models import Sequential  # sequential model
from keras.layers import Dense  # fully connected layer
import tensorflow as tf
"""
keras
random
mglearn
plt
sns
StandardScaler
argparse
"""

"""
# Step 1: Frame Sampling
# Uniform Sampling
cap1 = cv2.VideoCapture('../EE6222 train and validate 2023/train/Sit/Sit_2_1.mp4')
# Count the total frames and sampled frames for comparison
frame_count = 0
sampled_count = 0
os.chdir('./Uniform')
while cap1.isOpened():
    ret, frame = cap1.read()
    if frame is not None:
        # print(frame[0].shape)
        frame_count += 1
        if frame_count % 4 == 1:
            cv2.imshow('frame', frame)
            cv2.imwrite('%d.jpg' % frame_count, img=frame)
            sampled_count += 1
    else:
        cap1.release()
cv2.destroyAllWindows()
print(frame_count)  # 58
print(sampled_count)  # 15
# Uniform Sampling

"""
"""
# Random Sampling
cap2 = cv2.VideoCapture('../EE6222 train and validate 2023/train/Sit/Sit_2_1.mp4')
os.chdir('./Random')
frame_count = 0
frame_set = np.zeros((58, 240, 320, 3), dtype=int)
while cap2.isOpened():
    ret, frame = cap2.read()
    if frame is not None:
        # print(frame[0].shape)
        frame_set[frame_count] = frame
        frame_count += 1
    else:
        cap2.release()

# Generate random sequence of 15 elements
sampled_frames = np.zeros((15, 240, 320, 3), dtype=int)
seq = np.zeros(15, dtype=int)
for i in range(0, 15):
    # print(i)
    seq[i] = random.randint(0, 58)
    sampled_frames[i] = frame_set[seq[i]]
    cv2.imshow('frame', sampled_frames[i])
    cv2.imwrite('%d.jpg' % i, img=sampled_frames[i])

cv2.destroyAllWindows()
"""  # Random Sampling 1
"""
# Random Sampling
cap2 = cv2.VideoCapture('../EE6222 train and validate 2023/train/Sit/Sit_2_1.mp4')
# Count the total frames and sampled frames for comparison
frame_count = 0
sampled_count = 0
os.chdir('./Random')
while cap2.isOpened():
    ret, frame = cap2.read()
    if frame is not None:
        # print(frame[0].shape)
        frame_count += 1
        if frame_count in [1, 4, 8, 12, 14, 20, 22, 29, 30, 42, 44, 49, 52, 54, 57]:
            cv2.imshow('frame', frame)
            cv2.imwrite('%d.jpg' % frame_count, img=frame)
            sampled_count += 1
    else:
        cap2.release()
cv2.destroyAllWindows()
print(frame_count)  # 58
print(sampled_count)  # 15
  # Random Sampling 2
"""
# feature_X =
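
# The construction of feature_X is left blank above. A minimal sketch of one
# plausible choice (an assumption, not the author's published pipeline): convert
# 15 evenly spaced frames to grayscale, flatten, and concatenate per video.
"""
def video_features(path, n_frames=15):
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    keep = set(np.linspace(0, total - 1, num=n_frames, dtype=int))
    vecs, idx = [], 0
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            if idx in keep:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                vecs.append(gray.astype(np.float32).ravel() / 255.0)
            idx += 1
        else:
            cap.release()
    return np.concatenate(vecs)  # one long vector per video

# feature_X = np.stack([video_features(p) for p in video_paths])  # video_paths: hypothetical list
"""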

"""
# Step 2: Feature Extraction
# PCA Method
model = PCA(n_components=50)  # pca model
model.fit(feature_X)  # train the pca model
# plot 1
plt.plot(model.explained_variance_ratio_, 'o-')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.title('PVE')
plt.show()
# plot 2
plt.plot(model.explained_variance_ratio_.cumsum(), 'o-')
plt.xlabel('Principal Component')
plt.ylabel('Cumulative Proportion of Variance Explained')
plt.axhline(0.9, color='k', linestyle='--', linewidth=1)
plt.title('CU-PVE')
plt.show()
"""



"""
# Step 3: Classifier Training and Evaluation
# SVM Classifier Training
train_X = feature_X  # Extracted training feature

train_y = np.zeros(150, dtype=int)  # Labels of the training data

for i in range(0, 25):
    train_y[i] = 0
for i in range(25, 50):
    train_y[i] = 1
for i in range(50, 75):
    train_y[i] = 2
for i in range(75, 100):
    train_y[i] = 3
for i in range(100, 125):
    train_y[i] = 4
for i in range(125, 150):
    train_y[i] = 5
print(train_y)
linear_svm = LinearSVC().fit(train_X, train_y)
w, b = linear_svm.coef_, linear_svm.intercept_
print(w, b)
"""

"""
# Step 4: Effects of Leveraging Image Enhancements
# Logarithmic enhancement function
def log(image):
    image_log = np.uint8(np.log(np.array(image) + 1))
    cv2.normalize(image_log, image_log, 0, 255, cv2.NORM_MINMAX)

    cv2.convertScaleAbs(image_log, image_log)
    return image_log


# Gamma enhancement function
def gamma(image):
    fgamma = 2
    image_gamma = np.uint8(np.power((np.array(image) / 255.0), fgamma) * 255.0)
    cv2.normalize(image_gamma, image_gamma, 0, 255, cv2.NORM_MINMAX)
    cv2.convertScaleAbs(image_gamma, image_gamma)
    return image_gamma


# Sample frames from video
cap_e = cv2.VideoCapture('../EE6222 train and validate 2023/train/Walk/Walk_10_1.mp4')
os.chdir('./Enhancement')
frame_count = 0
while cap_e.isOpened():
    ret, frame = cap_e.read()
    if frame is not None:
        frame_count += 1
        if frame_count % 10 == 1:
            cv2.imshow('frame', frame)
            cv2.imwrite('0%d before.jpg' % frame_count, img=frame)
            cv2.imwrite('%d after log.jpg' % frame_count, img=log(frame))
            cv2.imwrite('%d after gamma.jpg' % frame_count, img=gamma(frame))
    else:
        cap_e.release()
print(frame_count)  # 57
cv2.destroyAllWindows()
"""

"""
train_x =
train_y =
validate_x =
"""

# Step 5: Improving the HAR Model
model = Sequential()  # empty sequential model
# Add fully connected layers. Only the first layer needs an input shape, and the
# original Conv2D(units=...) calls are dropped: Conv2D takes filters/kernel_size,
# not units, and convolution does not apply to a flat feature vector anyway.
model.add(Dense(units=128, activation='relu', input_dim=100))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=32, activation='relu'))
model.add(Dense(units=6, activation='softmax'))  # 6 action classes, as in Step 3
# model information
print(model.summary())
# compile the model
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
# train the model (train_y must be one-hot encoded for categorical_crossentropy)
# model.fit(train_x, train_y, epochs=50, batch_size=32)
# use the model to predict the validation data
# model.predict(validate_x, batch_size=32, verbose=1)
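
# Putting Step 5 together (a sketch; train_x, train_y and validate_x are the
# hypothetical arrays from the placeholder above): categorical_crossentropy
# expects one-hot labels, so the integer labels from Step 3 are converted first.
"""
train_y_onehot = keras.utils.to_categorical(train_y, num_classes=6)
model.fit(train_x, train_y_onehot, epochs=50, batch_size=32)
pred = model.predict(validate_x, batch_size=32)
pred_labels = np.argmax(pred, axis=1)  # back to integer class ids
"""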