Multi-label Classification

Multi-label classification (multilabel classification)

The multi-label classification problem

caffe: can each sample have multiple labels?

Training a multi-label classification/regression model with caffe

Fine-tuning a single-label image classification model with caffe

Multi-label detection with caffe-based GoogLeNet

https://github.com/Numeria-Jun/multi-labels-googLeNet-caffe

Multi-label training based on Inception v3

Generating HDF5 files for multi-label training

Importing HDF5 layer data larger than 2 GB in Caffe

Generating data for the caffe HDF5 data layer

Caffe study notes (11): generating HDF5Data-type datasets for multi-task learning

An HDF5Data example in Caffe
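The HDF5 route in the links above comes down to writing the features and a multi-hot label matrix as float datasets into an .h5 file, then pointing Caffe's HDF5Data layer at a text file that lists those .h5 paths; the dataset names must match the layer's top blob names (typically data and label). A minimal sketch with h5py, using made-up shapes and random data purely for illustration:

import h5py
import numpy as np

# Illustrative sizes only; replace with your real features and multi-hot labels.
n_samples, n_features, n_classes = 100, 1024, 20
features = np.random.rand(n_samples, n_features).astype(np.float32)
labels = np.zeros((n_samples, n_classes), dtype=np.float32)
for i in range(n_samples):
    # Give each sample a few random positive labels, just for the example.
    labels[i, np.random.choice(n_classes, 3, replace=False)] = 1

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=features)   # names must match the HDF5Data layer's top blobs
    f.create_dataset('label', data=labels)

with open('train_h5_list.txt', 'w') as f:     # the HDF5Data layer's "source" points to this list file
    f.write('train.h5\n')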

Multi-label data preparation and training through the LMDB interface in Caffe
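Because the stock Datum used by Caffe's Data layer carries only a single integer label, one common way to do what the link above describes is to keep two aligned LMDBs: one with the images and a second one whose Datum stores the multi-hot label vector in float_data, read by a second Data layer. A rough sketch of writing such a label LMDB with the Python lmdb and caffe bindings; the key format and sizes here are placeholders:

import lmdb
import numpy as np
from caffe.proto import caffe_pb2

n_samples, n_classes = 100, 20                                      # placeholder sizes
label_matrix = np.zeros((n_samples, n_classes), dtype=np.float32)   # your multi-hot labels

env = lmdb.open('train_label_lmdb', map_size=1 << 30)
with env.begin(write=True) as txn:
    for idx in range(n_samples):
        datum = caffe_pb2.Datum()
        datum.channels, datum.height, datum.width = n_classes, 1, 1
        datum.float_data.extend(label_matrix[idx].tolist())         # labels go into float_data
        # Keys must line up with the image LMDB so the two Data layers stay in sync.
        txn.put('{:08d}'.format(idx).encode('ascii'), datum.SerializeToString())
env.close()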

Implementing multi-label input in caffe (multilabel, multitask)

Implementing multi-label input in caffe (modified-source version)

Modifying the Caffe source code (for vehicle localization)

Caffe multi-label classification in practice: car brand and vehicle appearance (C++ interface) [detailed implementation + dataset]

Modifying caffe so that it accepts multi-label input and uses it in the network layers

Multi-label classification via multi-task learning in caffe

Modifying the caffe source code so that the input data carries multiple labels

Adding multi-label support to the Caffe source code


Caffe_MultiLabel_Classification

Is there a good implementation of caffe that supports multiple labels?

Multilabel classification on PASCAL using python data-layers

Sigmoid Cross Entropy Loss
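This is the same loss the TensorFlow script at the end of this post uses (tf.nn.sigmoid_cross_entropy_with_logits): each class gets its own independent sigmoid, so several classes can be positive at once. A small numpy check of the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)):

import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Stable per-class binary cross-entropy:
    # max(x, 0) - x*z + log(1 + exp(-|x|)) == -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([[2.0, -1.0, 0.5]])
labels = np.array([[1.0, 0.0, 1.0]])   # multi-hot target: classes 0 and 2 are present
print(sigmoid_cross_entropy_with_logits(logits, labels).mean())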

extreme classification

For learning from multi-label data, what are the commonly used classifiers or classification strategies?


Keras in practice (1): a multi-label neural network
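For Keras, the usual multi-label recipe is a sigmoid output layer paired with binary_crossentropy, which is the same per-class loss as above. A minimal tf.keras sketch with placeholder dimensions:

import tensorflow as tf

n_input, n_classes = 1024, 20          # placeholder dimensions
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(n_input,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(n_classes, activation='sigmoid'),  # independent sigmoid per class
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit(features, multi_hot_labels, batch_size=512, epochs=5)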


CNN: single-label to multi-label (summary)

CNN: Single-label to Multi-label

Paper notes | CNN-RNN: A Unified Framework for Multi-label Image Classification

Multi-label image classification with HCP: A Flexible CNN Framework for Multi-Label Image Classification



https://github.com/sebastian-lapuschkin/lrp_toolbox/blob/master/caffe-master-lrp/src/caffe/layers/hinge_loss_multilabel_layer.cpp

https://github.com/qibinchuan/caffe-multilabel-float/blob/master/README.qbc

https://github.com/BVLC/caffe/wiki/Model-Zoo#pascal-voc-2012-multilabel-classification-model

https://github.com/kevinlu1211/Kaggle-Iceberg-Classifier


Evaluation Metrics


Evaluation metrics for multi-label classification

Research on accuracy evaluation methods for multi-label classifiers (秦锋)
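Multi-label evaluation differs from single-label accuracy because a prediction can be partially correct. A small numpy sketch of two common example-based measures, Hamming loss and per-sample F1, over multi-hot matrices:

import numpy as np

def hamming_loss(y_true, y_pred):
    # Fraction of label positions that are predicted incorrectly.
    return np.mean(y_true != y_pred)

def example_f1(y_true, y_pred):
    # Per-sample F1 = 2*|true AND pred| / (|true| + |pred|), averaged over samples;
    # samples with no true and no predicted labels are scored 1.
    tp = np.sum((y_true == 1) & (y_pred == 1), axis=1)
    denom = np.sum(y_true == 1, axis=1) + np.sum(y_pred == 1, axis=1)
    return np.mean(np.where(denom > 0, 2.0 * tp / np.maximum(denom, 1), 1.0))

y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 1, 1], [0, 0, 0]])
print(hamming_loss(y_true, y_pred), example_f1(y_true, y_pred))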


https://github.com/HolidayXue/CodeSnap/blob/master/convert_multilabel.txt

Trains on a dataset of 1024-dimensional features with multi-label targets; the label list is converted into a one-hot-style vector with multiple 1s (multi-hot) and fed through two fully connected layers. Very direct, sequential code, written as an exercise and to be easy to follow.


import os
import numpy as np
import tensorflow as tf
# Parameters  
learning_rate = 0.001  
training_epochs = 5  
batch_size = 512  
display_step = 2  
  
# Network Parameters  
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 128 # 2nd layer number of features
n_input = 1024   # dimensionality of each input feature vector
n_classes = 4716 # number of label classes
  
# tf Graph input  
x = tf.placeholder("float", [None, n_input])  
y = tf.placeholder("float", [None, n_classes])  
  
  
# Create model  
def multilayer_perceptron(x, weights, biases):  
    # Hidden layer with RELU activation  
    input_dim = len(x.get_shape()) - 1  
    model_input = tf.nn.l2_normalize(x, input_dim)  
    layer_1 = tf.add(tf.matmul(model_input, weights['h1']), biases['b1'])  
    layer_1 = tf.nn.relu(layer_1)  
    # Hidden layer with RELU activation  
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])  
    layer_2 = tf.nn.relu(layer_2)  
    layer_2 = tf.nn.dropout(layer_2, 0.5)  
    # Output layer with linear activation  
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']  
    return out_layer  
  
# Store layers weight & bias  
weights = {  
    'h1': tf.get_variable("wh1", shape=[n_input, n_hidden_1],  
           initializer=tf.contrib.layers.xavier_initializer()),  
  
    'h2': tf.get_variable("wh2", shape=[n_hidden_1, n_hidden_2],  
           initializer=tf.contrib.layers.xavier_initializer()),  
  
    'out': tf.get_variable("wout", shape=[n_hidden_2, n_classes],  
           initializer=tf.contrib.layers.xavier_initializer())  
}  
biases = {  
    'b1': tf.Variable(tf.zeros([n_hidden_1])),  
    'b2': tf.Variable(tf.zeros([n_hidden_2])),  
    'out': tf.Variable(tf.zeros([n_classes]))  
}  
  
  
# Construct model  
pred = multilayer_perceptron(x, weights, biases)  
  
# Define loss and optimizer  
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=y))  
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)  
  
# Initializing the variables  
init = tf.global_variables_initializer()  
video_level_record_path = "./video_level/train/"  # NEED CHANGE: directory of video-level TFRecord files
checkpoint_dir = "./ckp"  # NEED CHANGE: directory for checkpoints and summaries
saver = tf.train.Saver()
# Launch the graph
with tf.Session() as sess:  
    writer = tf.summary.FileWriter(checkpoint_dir, sess.graph)  
    ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
        # Resume from the latest checkpoint if one exists.
        saver.restore(sess, ckpt.model_checkpoint_path)
    else:
        sess.run(init)
    # Training cycle: accumulate examples into a batch, then run one optimization step per batch.
    avg_cost = 0.
    i = 0
    total = 0
    batches = 0
    binary_labels_list = np.array([])
    data_list = np.array([])
    for epoch in range(training_epochs):
        for video_level_record in os.listdir(video_level_record_path):
            video_level_record = os.path.join(video_level_record_path, video_level_record)
            for example in tf.python_io.tf_record_iterator(video_level_record):
                if i < batch_size:
                    tf_example = tf.train.Example.FromString(example)
                    #print (i, tf_example.features.feature['video_id'].bytes_list.value[0].decode(encoding='UTF-8'))
                    # Convert the sparse label list into a 1 x n_classes multi-hot row vector.
                    labels = tf_example.features.feature['labels'].int64_list.value
                    binary_labels = np.zeros((1, n_classes), dtype=np.float32)
                    for label in labels:
                        binary_labels[0, label] = 1
                    data = np.asarray(
                        tf_example.features.feature['mean_rgb'].float_list.value).reshape(1, n_input)
                    if len(binary_labels_list) == 0:
                        binary_labels_list = binary_labels
                    else:
                        binary_labels_list = np.vstack([binary_labels_list, binary_labels])
                    if len(data_list) == 0:
                        data_list = data
                    else:
                        data_list = np.vstack([data_list, data])
                    i += 1
                    total += 1
                else:
                    if len(data_list) > 0 and len(binary_labels_list) > 0:
                        # Run optimization op (backprop) and cost op (to get loss value)
                        _, c = sess.run([optimizer, cost],
                                        feed_dict={x: data_list, y: binary_labels_list})
                        avg_cost = c
                    i = 0
                    binary_labels_list = np.array([])
                    data_list = np.array([])
                    batches += 1
                    if batches % display_step == 0:
                        print("Epoch:", '%04d' % (epoch + 1),
                              "Iter:", "%d" % batches,
                              "cost=", str(avg_cost))
                        summ = tf.Summary()
                        summ.value.add(tag="cost", simple_value=avg_cost)
                        writer.add_summary(summ, batches)
                    if batches % 100 == 0:
                        save_path = saver.save(sess, checkpoint_dir + "/model.ckpt-" + str(batches))
                        print("saved", save_path)
    print("Optimization Finished!")  
  
    # Test model
    # Note: the argmax-based accuracy below is copied from a single-label (MNIST)
    # example and does not apply to multi-label outputs; see the sketch below.
    #correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    #accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    #print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
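For a multi-label model, a more meaningful check is to threshold the sigmoid of the logits at 0.5 and compare the resulting multi-hot predictions with the ground truth. A sketch of what that could look like, continuing inside the same with tf.Session() block, where test_data and test_labels are hypothetical held-out arrays built the same way as the training batches:

    # test_data: (n, n_input) float features; test_labels: (n, n_classes) multi-hot.
    probs = tf.sigmoid(pred)
    predictions = tf.cast(probs > 0.5, tf.float32)  # predicted label set per sample
    # Per-label accuracy: fraction of label positions predicted correctly.
    per_label_acc = tf.reduce_mean(tf.cast(tf.equal(predictions, y), tf.float32))
    # Exact-match ratio: fraction of samples whose whole label vector is correct.
    exact_match = tf.reduce_mean(
        tf.cast(tf.reduce_all(tf.equal(predictions, y), axis=1), tf.float32))
    print("Per-label accuracy:", per_label_acc.eval({x: test_data, y: test_labels}))
    print("Exact match ratio:", exact_match.eval({x: test_data, y: test_labels}))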


