【CNTK】Training and Saving a Complete VGG Model in CNTK

My work needs to interface with C#, and CNTK is a Microsoft framework, so I spent a few months learning it. Throughout the model design the focus is on the framework itself, so I won't dwell on the theory behind VGG. The main topics are:
1. How to create a dataset CNTK can read
2. Implementing the VGG model
3. The complete training procedure

1. How to create a dataset CNTK can read
Creating a readable dataset mainly means writing the image paths under a folder into text files in batches, split however you like into train.txt, validation.txt and test.txt.
Once you have these txt (map) files, convert them into a MinibatchSource with a reader function. Given a MinibatchSource (call it reader_train), the dataset's features and labels are then available through reader_train.streams.features and reader_train.streams.labels.
The code is as follows:

import os
import cntk as C
import cntk.io.transforms as xforms

# Image/dataset parameters referenced below (example values).
image_width, image_height, num_channels = 224, 224, 3
num_classes = 2

def create_reader(map_file, mean_file, train):
    if not os.path.exists(map_file):
        raise RuntimeError("Map file %s does not exist." % map_file)

    # The transformation pipeline applies jitter/crop only during training.
    transforms = []
    if train:
        # Data augmentation: random-side cropping.
        transforms += [
            xforms.crop(crop_type='randomside', side_ratio=0.8)
        ]
    transforms += [
        xforms.scale(width=image_width, height=image_height,
                     channels=num_channels, interpolations='linear'),
        # xforms.mean(mean_file)  # enable when a mean file is available
    ]
    # Deserializer: the first column in the map file is 'image', the second 'label'.
    streamdefs = C.io.StreamDefs(
        features=C.io.StreamDef(field='image', transforms=transforms),
        labels=C.io.StreamDef(field='label', shape=num_classes)
    )
    imagedeserialized = C.io.ImageDeserializer(map_file, streamdefs)
    batch_source = C.io.MinibatchSource(imagedeserialized)
    return batch_source

2. Implementing the VGG model
VGG is a fairly typical classification model. Its implementation is as follows:

def create_VGG16(input_var, out_dims):
    # VGG16-style network: stacks of 3x3 convolutions with batch normalization
    # and max pooling, followed by fully-connected layers.
    with C.layers.default_options(init=C.glorot_uniform(), activation=C.relu):
        conv_64 = C.layers.Convolution((3,3), 64, pad=True)(input_var)
        conv_64 = C.layers.Convolution((3,3), 64, pad=True)(conv_64)
        conv_64 = C.layers.BatchNormalization(map_rank=1)(conv_64)
        conv_64 = C.layers.MaxPooling((2,2), strides=2, pad=False)(conv_64)    # 64 x 112 x 112

        conv_128 = C.layers.Convolution((3,3), 128, pad=True)(conv_64)
        conv_128 = C.layers.Convolution((3,3), 128, pad=True)(conv_128)
        conv_128 = C.layers.BatchNormalization(map_rank=1)(conv_128)
        conv_128 = C.layers.MaxPooling((2,2), strides=2, pad=False)(conv_128)  # 128 x 56 x 56

        conv_256 = C.layers.Convolution((3,3), 256, pad=True)(conv_128)
        conv_256 = C.layers.Convolution((3,3), 256, pad=True)(conv_256)
        conv_256 = C.layers.Convolution((3,3), 256, pad=True)(conv_256)
        conv_256 = C.layers.BatchNormalization(map_rank=1)(conv_256)
        conv_256 = C.layers.MaxPooling((2,2), strides=2, pad=False)(conv_256)  # 256 x 28 x 28

        conv_512 = C.layers.Convolution((3,3), 512, pad=True)(conv_256)
        conv_512 = C.layers.Convolution((3,3), 512, pad=True)(conv_512)
        conv_512 = C.layers.Convolution((3,3), 512, pad=True)(conv_512)
        conv_512 = C.layers.BatchNormalization(map_rank=1)(conv_512)
        conv_512 = C.layers.MaxPooling((2,2), strides=2, pad=False)(conv_512)  # 512 x 14 x 14

        conv_512 = C.layers.Convolution((3,3), 512, pad=True)(conv_512)
        conv_512 = C.layers.Convolution((3,3), 512, pad=True)(conv_512)
        conv_512 = C.layers.Convolution((3,3), 512, pad=True)(conv_512)
        conv_512 = C.layers.BatchNormalization(map_rank=1)(conv_512)
        conv_512 = C.layers.MaxPooling((2,2), strides=2, pad=False)(conv_512)  # 512 x 7 x 7

        fc1 = C.layers.Dense(4096)(conv_512)
        fc2 = C.layers.Dense(2048)(fc1)
        fc3 = C.layers.BatchNormalization(map_rank=1)(fc2)
        fc4 = C.layers.Dense(out_dims, activation=None)(fc3)  # class scores (logits), one per class
    return fc4
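The training config below refers to `z`, `ce` and `pe`. They come from wiring the network to input and label variables; a sketch of that glue code (the image dimensions and the `/ 256.0` scaling are my assumptions, chosen to match the feature-map comments above):

```python
import cntk as C

# Dimensions consistent with the 112/56/28/14/7 feature-map comments above.
num_channels, image_height, image_width = 3, 224, 224
num_classes = 2

# Input and label variables matching the reader's 'features'/'labels' streams.
input_var = C.input_variable((num_channels, image_height, image_width))
label_var = C.input_variable(num_classes)

# Scale raw pixel values into [0, 1) before feeding the network.
z = create_VGG16(input_var / 256.0, num_classes)

# Loss and evaluation metric, passed to the Trainer as (ce, pe).
ce = C.cross_entropy_with_softmax(z, label_var)
pe = C.classification_error(z, label_var)
```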

With the network structure defined, the next step is configuring the hyperparameters. The relevant code block:

    # Training configuration
    epoch_size     = 37821   # number of training samples per epoch
    minibatch_size = 64

    # Learning-rate schedule: 0.01 for 10 epochs, 0.003 for the next 10, then 0.001
    lr_per_minibatch = C.learning_parameter_schedule([0.01]*10 + [0.003]*10 + [0.001],
                                                     epoch_size=epoch_size)
    momentums     = C.momentum_schedule(0.9, minibatch_size=minibatch_size)
    l2_reg_weight = 0.001

    # Learner and trainer objects
    learner = C.momentum_sgd(z.parameters,
                             lr=lr_per_minibatch,
                             momentum=momentums,
                             l2_regularization_weight=l2_reg_weight)
    progress_printer = C.logging.ProgressPrinter(tag='Training', num_epochs=50)
    trainer = C.Trainer(z, (ce, pe), [learner], [progress_printer])

3. The complete training procedure
With the dataset prepared, the model designed and the hyperparameters configured, we can run the training loop.

    for epoch in range(max_epochs):       # loop over epochs
        sample_count = 0
        while sample_count < epoch_size:  # loop over minibatches within the epoch
            # Fetch the next minibatch, never reading past the end of the epoch.
            data = reader_train.next_minibatch(min(minibatch_size, epoch_size - sample_count),
                                               input_map=input_map)
            trainer.train_minibatch(data)                # update the model with it
            sample_count += data[label_var].num_samples  # count samples processed so far

        trainer.summarize_training_progress()  # print per-epoch training statistics
    return C.softmax(z)
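The `input_map` passed to `next_minibatch` in the loop above binds the reader's streams to the network's variables. Assuming the `reader_train`, `input_var` and `label_var` names used throughout this post, it is built like this:

```python
# Bind the MinibatchSource's streams to the network's input/label variables,
# so next_minibatch() returns data keyed by those variables.
input_map = {
    input_var: reader_train.streams.features,
    label_var: reader_train.streams.labels
}
```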

Each step feeds minibatch_size samples into the trainer, and the whole dataset is iterated max_epochs times. The final output is C.softmax(z), which turns the network's logits into class probabilities.

CNTK models can be saved as .model files.
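For example (the file name here is illustrative):

```python
import cntk as C

# Persist the trained function graph to a CNTK-format .model file.
z.save("vgg16.model")

# The file can later be reloaded in Python, or from C# through the
# CNTK managed API (Function.Load).
model = C.load_model("vgg16.model")
```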
