Notes on resources for video action classification
https://github.com/jinwchoi/awesome-action-recognition
A curated list of papers and code links for video classification / action recognition.
kenshohara / 3D-ResNets-PyTorch
https://github.com/kenshohara/3D-ResNets-PyTorch
Classic video classification code; includes download instructions or links for three datasets.
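For orientation, the sketch below shows what a forward pass of a 3D-CNN video classifier of this kind looks like. It is an assumption made purely for illustration: it uses torchvision's bundled r3d_18 (a 3D ResNet-18 pretrained on Kinetics-400) rather than the repo's own model and data loaders, and the clip dimensions are dummy values.

import torch
from torchvision.models.video import r3d_18  # assumption: torchvision's 3D ResNet-18, not the repo's own code

model = r3d_18(pretrained=True)  # weights pretrained on Kinetics-400
model.eval()

# A video clip is a 5-D tensor: (batch, channels, frames, height, width).
clip = torch.randn(1, 3, 16, 112, 112)  # dummy 16-frame RGB clip

with torch.no_grad():
    logits = model(clip)       # shape (1, 400): scores over the Kinetics-400 classes
print(logits.argmax(dim=1))    # predicted action class index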
ActivityNet Kinetics - Downloader
https://github.com/activitynet/ActivityNet/tree/master/Crawler/Kinetics
HDRNet
https://groups.csail.mit.edu/graphics/hdrnet/data/hdrnet.pdf
deep_bilateral_network PyTorch
https://github.com/EricElmoznino/deep_bilateral_network
Keras data generator: subclass keras.utils.Sequence and pass it to fit_generator, so batches are loaded on demand and the whole dataset never has to sit in memory.
# coding=utf-8
'''
Created on 2018-7-10
Keras data generator example: subclass keras.utils.Sequence so batches
are loaded from disk on demand instead of all at once.
'''
import math
import os

import cv2
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense


class DataGenerator(keras.utils.Sequence):
    def __init__(self, datas, batch_size=1, shuffle=True):
        self.batch_size = batch_size
        self.datas = datas
        self.indexes = np.arange(len(self.datas))
        self.shuffle = shuffle

    def __len__(self):
        # Number of batches per epoch.
        return int(math.ceil(len(self.datas) / float(self.batch_size)))

    def __getitem__(self, index):
        # Build one batch; adapt the loading logic to your own data format.
        # Pick batch_size indices for this batch.
        batch_indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
        # Fetch the corresponding file paths.
        batch_datas = [self.datas[k] for k in batch_indexes]
        # Load and assemble the batch.
        X, y = self.data_generation(batch_datas)
        return X, y

    def on_epoch_end(self):
        # Optionally reshuffle the indices at the end of every epoch.
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def data_generation(self, batch_datas):
        images = []
        labels = []
        for data in batch_datas:
            # x_train: read the image as 28x28 grayscale and flatten it to a
            # length-784 vector so it matches Dense(input_dim=784) below.
            image = cv2.imread(data, cv2.IMREAD_GRAYSCALE)
            image = cv2.resize(image, (28, 28)).astype(np.float32).flatten() / 255.0
            images.append(image)
            # y_train: the class name is the parent directory of the file.
            class_name = os.path.basename(os.path.dirname(data))
            if class_name == "dog":
                labels.append([0, 1])
            else:
                labels.append([1, 0])
        # For a multi-output model, Y must instead be a Python list holding one
        # numpy array per output: [numpy_out1, numpy_out2, numpy_out3]
        # (see the sketch after this script).
        return np.array(images), np.array(labels)


# Collect the sample file paths; each class lives in its own sub-directory.
class_num = 0
train_datas = []
for file in os.listdir("D:/xxx"):
    file_path = os.path.join("D:/xxx", file)
    if os.path.isdir(file_path):
        class_num = class_num + 1
        for sub_file in os.listdir(file_path):
            train_datas.append(os.path.join(file_path, sub_file))

# Data generator
training_generator = DataGenerator(train_datas)

# Build a small network.
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=784))
model.add(Dense(units=2, activation='softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(training_generator, epochs=50, max_queue_size=10, workers=1)
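For the multi-output case mentioned in the comment inside data_generation, only the Y part of the return value changes shape. A minimal sketch, assuming a hypothetical model with two outputs (say, a class head and a regression head); load_image, lookup_class and lookup_box are placeholders for your own loading code, not real functions:

    def data_generation_multi_output(self, batch_datas):
        # Hypothetical two-output variant: each output gets its own label list.
        images, class_labels, box_labels = [], [], []
        for data in batch_datas:
            images.append(load_image(data))          # placeholder: your own image loader
            class_labels.append(lookup_class(data))  # placeholder: one-hot label for output 1
            box_labels.append(lookup_box(data))      # placeholder: regression target for output 2
        # X stays a single numpy array; Y becomes a plain Python list with one
        # numpy array per model output, in the same order as the model's outputs.
        return np.array(images), [np.array(class_labels), np.array(box_labels)]

This list-of-arrays format is what fit_generator expects whenever the model was built with multiple outputs.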
The generator example above is adapted from an original post by CSDN blogger 姚贤贤, released under CC 4.0 BY-SA; keep this attribution and the original link when redistributing.
Original post: https://blog.csdn.net/u011311291/article/details/80991330