Trying to use the caffe-python interface to generate prototxt files, mimicking the PyTorch style of model definition, so new models can be built quickly.

References:

https://www.jianshu.com/p/1a420445deea

https://blog.csdn.net/wfei101/article/details/79595475

https://github.com/GeekLiB/caffe-model

https://github.com/weiliu89/caffe/blob/ssd/examples/ssd/ssd_pascal.py

https://github.com/yonghenglh6/DepthwiseConvolution/blob/master/transfer2Mobilenet.py

Goal: implement MobileNet-v2 or v3.

Which keyword arguments does the caffe-python (NetSpec) layer interface accept? Take the simplest convolution, for example:

if bias:
    net.conv1_1 = L.Convolution(net[from_layer], num_output=out_channels, pad=padding, kernel_size=3, stride=stride, **kwargs_bias)
else:
    net.conv1_1 = L.Convolution(net[from_layer], num_output=out_channels, pad=padding, kernel_size=3, stride=stride, **kwargs_nobias)
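kwargs_bias / kwargs_nobias are not defined in the snippet above; in SSD-style helper scripts they usually carry the param (lr_mult/decay_mult) and filler settings. A minimal sketch of what they might contain (the exact values are an assumption modeled on the model_libs.py convention, not taken from the original):

# Hypothetical contents of kwargs_bias / kwargs_nobias, following the
# weiliu89/ssd model_libs.py convention.
kwargs_bias = {
    'param': [dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
    'weight_filler': dict(type='xavier'),
    'bias_filler': dict(type='constant', value=0),
}
kwargs_nobias = {
    'param': [dict(lr_mult=1, decay_mult=1)],
    'weight_filler': dict(type='xavier'),
    'bias_term': False,
}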

To see exactly which fields each layer parameter supports, open caffe_pb2.py:

_CONVOLUTIONPARAMETER = _descriptor.Descriptor(
  name='ConvolutionParameter',
  full_name='caffe.ConvolutionParameter',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='num_output', full_name='caffe.ConvolutionParameter.num_output', index=0,
      number=1, type=13, cpp_type=3, label=1,
      has_default_value=False, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='bias_term', full_name='caffe.ConvolutionParameter.bias_term', index=1,
      number=2, type=8, cpp_type=7, label=1,
      has_default_value=True, default_value=True,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='pad', full_name='caffe.ConvolutionParameter.pad', index=2,
      number=3, type=13, cpp_type=3, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='kernel_size', full_name='caffe.ConvolutionParameter.kernel_size', index=3,
      number=4, type=13, cpp_type=3, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='stride', full_name='caffe.ConvolutionParameter.stride', index=4,
      number=6, type=13, cpp_type=3, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='dilation', full_name='caffe.ConvolutionParameter.dilation', index=5,
      number=18, type=13, cpp_type=3, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='pad_h', full_name='caffe.ConvolutionParameter.pad_h', index=6,
      number=9, type=13, cpp_type=3, label=1,
      has_default_value=True, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='pad_w', full_name='caffe.ConvolutionParameter.pad_w', index=7,
      number=10, type=13, cpp_type=3, label=1,
      has_default_value=True, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='kernel_h', full_name='caffe.ConvolutionParameter.kernel_h', index=8,
      number=11, type=13, cpp_type=3, label=1,
      has_default_value=False, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='kernel_w', full_name='caffe.ConvolutionParameter.kernel_w', index=9,
      number=12, type=13, cpp_type=3, label=1,
      has_default_value=False, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='stride_h', full_name='caffe.ConvolutionParameter.stride_h', index=10,
      number=13, type=13, cpp_type=3, label=1,
      has_default_value=False, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='stride_w', full_name='caffe.ConvolutionParameter.stride_w', index=11,
      number=14, type=13, cpp_type=3, label=1,
      has_default_value=False, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='group', full_name='caffe.ConvolutionParameter.group', index=12,
      number=5, type=13, cpp_type=3, label=1,
      has_default_value=True, default_value=1,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='weight_filler', full_name='caffe.ConvolutionParameter.weight_filler', index=13,
      number=7, type=11, cpp_type=10, label=1,
      has_default_value=False, default_value=None,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='bias_filler', full_name='caffe.ConvolutionParameter.bias_filler', index=14,
      number=8, type=11, cpp_type=10, label=1,
      has_default_value=False, default_value=None,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='engine', full_name='caffe.ConvolutionParameter.engine', index=15,
      number=15, type=14, cpp_type=8, label=1,
      has_default_value=True, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='axis', full_name='caffe.ConvolutionParameter.axis', index=16,
      number=16, type=5, cpp_type=1, label=1,
      has_default_value=True, default_value=1,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='force_nd_im2col', full_name='caffe.ConvolutionParameter.force_nd_im2col', index=17,
      number=17, type=8, cpp_type=7, label=1,
      has_default_value=True, default_value=False,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
    _CONVOLUTIONPARAMETER_ENGINE,
  ],
  options=None,
  is_extendable=False,
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=9946,
  serialized_end=10454,
)
The fields of ConvolutionParameter are therefore: num_output, bias_term, pad, kernel_size, stride, dilation, pad_h, pad_w, kernel_h, kernel_w, stride_h, stride_w, group, weight_filler, bias_filler, engine, axis, force_nd_im2col.
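All of these field names can be passed directly as keyword arguments to L.Convolution. A minimal sketch (layer names, input shape, and filler choice are just an example):

import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 3, 224, 224]))
# Any ConvolutionParameter field (num_output, pad, kernel_size, stride,
# dilation, group, bias_term, weight_filler, ...) is a valid keyword argument.
n.conv1 = L.Convolution(n.data, num_output=32, kernel_size=3, stride=2, pad=1,
                        bias_term=False, weight_filler=dict(type='msra'))
print(n.to_proto())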

For a custom layer, also open caffe_pb2.py (regenerated from the modified caffe.proto). For example, an Upsample layer registered under field number 149 of LayerParameter corresponds to:

  optional UpsampleParameter upsample_param = 149;

Then search for the corresponding UpsampleParameter descriptor, which shows how that layer's parameters are parsed.
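As long as the layer is registered in caffe.proto, NetSpec can emit it by name. A sketch, assuming the custom layer type is Upsample with a scale field (check your own caffe.proto for the real type and field names):

import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 256, 13, 13]))
# 'Upsample' as the layer type and 'scale' as its field are assumptions;
# they must match what is registered in your modified caffe.proto.
n.upsample1 = L.Upsample(n.data, upsample_param=dict(scale=2))
print(n.to_proto())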

First, here is a complete example that someone else has already written:

# -*- coding: utf-8 -*-
from __future__ import print_function

import os

import caffe
from caffe import layers as L
from caffe import params as P

from caffe.proto import caffe_pb2
from caffe import to_proto
from google.protobuf import text_format


path='./txt/'
train_list='./txt/train_list.txt'
val_list='./txt/test_list.txt'           
train_proto=path+'denseNet1.prototxt'   
deploy_proto=path+'deploy_denseNet1.prototxt'       


def net_block(input,kernel_size=3,num_output=32,stride=1,pad=1,MAX_POOL=False,BN=False,GROUP=False):
    conv = None
    if GROUP:
        conv=L.Convolution(input, kernel_size=kernel_size, stride=stride,num_output=num_output,group=num_output,engine=1,bias_term=False, pad=pad,weight_filler=dict(type='xavier'))
    else:
        conv=L.Convolution(input, kernel_size=kernel_size, stride=stride,num_output=num_output,bias_term=False, pad=pad,weight_filler=dict(type='xavier'))
    #conv_bn = L.BatchNorm(conv, use_global_stats=False, in_place=True)
    if MAX_POOL:
        maxpool1=L.Pooling(conv, pool=P.Pooling.MAX,stride=2,kernel_size=3)
        bn = L.BatchNorm(maxpool1, use_global_stats=False, in_place=True)
        #scale = L.Scale(bn,filler=1,bias_term=true,bias_filler=0)
        scale = L.Scale(bn,filler=1,bias_term=True,bias_filler=0)
        relu=L.ReLU(scale, in_place=True)
        return maxpool1,relu
    else:
        if BN:
            bn = L.BatchNorm(conv, use_global_stats=False, in_place=True)
            scale = L.Scale(bn,filler=1,bias_term=True,bias_filler=0)
            relu=L.ReLU(scale, in_place=True) 
        else:
            relu=L.ReLU(conv, in_place=True)     
    #relu=L.ReLU(bn, in_place=True)
        return relu

def eltwise_relu(bottom1, bottom2):
    residual_eltwise = L.Eltwise(bottom1, bottom2, eltwise_param=dict(operation=1))
    residual_eltwise_relu = L.ReLU(residual_eltwise, in_place=True)

    return residual_eltwise_relu

def mobile_modeule(input, num_3x3,num_1x1,stride=1):
    # engine defaults to 0 (DEFAULT); 1 corresponds to CAFFE, 2 to CUDNN
    model3=L.Convolution(input, kernel_size=3, stride=stride,num_output=num_3x3,group=num_3x3,bias_term=False, pad=1,engine=1,weight_filler=dict(type='xavier'))
    model1=L.Convolution(model3, kernel_size=1, stride=1,num_output=num_1x1,bias_term=False, pad=0,weight_filler=dict(type='xavier'))
    bn = L.BatchNorm(model1, use_global_stats=False, in_place=True)
    relu=L.ReLU(bn,in_place=True)

    return relu   

def group_conv(input,num_output,GROUP=False,kernel_size=3,stride=1,pad=1):
    engine=0
    group=1
    if GROUP:
        engine=1
        group=num_output
    conv=L.Convolution(input, kernel_size=kernel_size, stride=stride,num_output=num_output,group=group,bias_term=False, pad=pad,engine=engine,weight_filler=dict(type='xavier'))
    return conv

def after_conv(conv):
    # In-place compute means the input and output share the same memory area, which is more memory efficient.
    bn = L.BatchNorm(conv, use_global_stats=False,in_place=False)
    #scale = L.Scale(bn,filler=dict(value=1),bias_filler=dict(value=0),bias_term=True, in_place=True)
    scale = L.Scale(bn,bias_term=True, in_place=True)
    relu=L.ReLU(scale, in_place=True)
    return relu

def res_block(input,stride=2,num_output=32,pad1=1,pad2=1,MAX_POOL=False):
    block1 = net_block(input=input,kernel_size=3,num_output=num_output,stride=stride,pad=pad1)
    block2 = net_block(input=block1,kernel_size=3,num_output=num_output,stride=1,pad=pad2)
    #block3 = net_block(input=block2,kernel_size=3,num_output=num_output,stride=1,pad=pad2)
    #block4 = eltwise_relu(block1,block2)
    residual_eltwise = L.Eltwise(block1, block2, eltwise_param=dict(operation=1))
    if MAX_POOL:
        maxpool1=L.Pooling(residual_eltwise, pool=P.Pooling.MAX,stride=2,kernel_size=3)
        bn = L.BatchNorm(maxpool1, use_global_stats=False, in_place=True)
        relu=L.ReLU(bn, in_place=True)
    else:
        bn = L.BatchNorm(residual_eltwise, use_global_stats=False, in_place=True)
        relu=L.ReLU(bn, in_place=True)        
    return relu

def concat_res_block(input):
    blockA1 = net_block(input=input,kernel_size=3,num_output=4,stride=2,pad=0)
    blockA2 = net_block(input=blockA1,kernel_size=3,num_output=4,stride=1,pad=0)
    blockA3 = eltwise_relu(blockA1,blockA2)

    blockB1 = net_block(input=input,kernel_size=3,num_output=4,stride=2,pad=0)
    blockB2 = net_block(input=blockB1,kernel_size=3,num_output=4,stride=1,pad=0)
    blockB3 = eltwise_relu(blockB1,blockB2)

    concatAB = L.Concat(blockA3, blockB3)    

'''def create_net(img_list,batch_size,include_acc=False):
    data,label=L.ImageData(source=img_list,batch_size=batch_size,shuffle=true,new_width=120,new_height=120,ntop=2,
                           transform_param=dict(crop_size=112,mirror=False,scale=0.0078125,mean_value=127.5))'''
def create_net(train_list,batch_size,include_acc=False):
    spec = caffe.NetSpec()   
    '''
    NetSpec is used for naming: every attribute assigned as spec.<name> becomes that layer's name in the
    generated prototxt. Layers created without spec (e.g. inside helper functions, where spec is not passed
    in) get automatically generated names.
    spec.data,spec.label=L.ImageData(source=train_list,batch_size=batch_size,shuffle=True,ntop=2, 
                                    transform_param=dict(crop_size=112,mirror=False,scale=0.0078125,mean_value=127.5),
                                    phase=0) '''
    spec.data,spec.label=L.ImageData(source=train_list,batch_size=batch_size,shuffle=True,ntop=2,
                           transform_param=dict(crop_size=112,mirror=False,scale=0.0078125,mean_value=127.5),include=dict(phase=caffe.TRAIN))

    spec.conv1=group_conv(spec.data,kernel_size=3,num_output=32,stride=2)
    spec.relu1=after_conv(spec.conv1)
    spec.conv2=group_conv(spec.relu1,num_output=32,GROUP=True,kernel_size=3,stride=1)
    spec.relu2=after_conv(spec.conv2)
    spec.concat1=L.Concat(spec.relu1,spec.relu2,axis=1)
    spec.pooling1 = L.Pooling(spec.concat1,pool=P.Pooling.MAX,stride=2,kernel_size=3)
    spec.relu3=after_conv(spec.pooling1)

    spec.conv3=group_conv(spec.relu3,num_output=64,kernel_size=1,stride=1,pad=0)
    spec.relu4=after_conv(spec.conv3)
    spec.conv4=group_conv(spec.relu4,num_output=64,GROUP=True,kernel_size=3,stride=1)
    spec.relu5=after_conv(spec.conv4)

    spec.concat2=L.Concat(spec.pooling1,spec.relu5,axis=1)
    spec.relu6=after_conv(spec.concat2)
    spec.conv5=group_conv(spec.relu6,num_output=128,kernel_size=1,stride=1,pad=0)
    spec.relu7=after_conv(spec.conv5)
    spec.conv6=group_conv(spec.relu7,num_output=128,GROUP=True,kernel_size=3,stride=1)
    spec.relu8=after_conv(spec.conv6)
    spec.concat3=L.Concat(spec.concat2,spec.relu8,axis=1)

    spec.pooling2 = L.Pooling(spec.concat3,pool=P.Pooling.MAX,stride=2,kernel_size=3)
    spec.relu8=after_conv(spec.pooling2)

    spec.conv7=group_conv(spec.relu8,num_output=256,kernel_size=1,stride=1,pad=0)
    spec.relu9=after_conv(spec.conv7)
    spec.conv8=group_conv(spec.relu9,num_output=256,GROUP=True,kernel_size=3,stride=1)
    spec.relu10=after_conv(spec.conv8)    

    spec.concat4=L.Concat(spec.pooling2,spec.relu10,axis=1)
    spec.relu11=after_conv(spec.concat4)

    spec.conv9=group_conv(spec.relu11,num_output=512,kernel_size=1,stride=1,pad=0)
    spec.relu12=after_conv(spec.conv9)
    spec.conv10=group_conv(spec.relu12,num_output=512,GROUP=True,kernel_size=3,stride=1)
    spec.relu13=after_conv(spec.conv10)

    spec.concat5=L.Concat(spec.concat4,spec.relu13,axis=1)
    spec.relu14=after_conv(spec.concat5)
    spec.pooling3 = L.Pooling(spec.relu14,pool=P.Pooling.MAX,stride=2,kernel_size=3)

    spec.relu15=after_conv(spec.pooling3)
    spec.conv11=group_conv(spec.relu15,num_output=1024,kernel_size=1,stride=1,pad=0)
    spec.relu16=after_conv(spec.conv11)
    spec.conv12=group_conv(spec.relu16,num_output=1024,GROUP=True,kernel_size=3,stride=1)
    spec.relu17=after_conv(spec.conv12)

    #OUT 7
    spec.maxpool=L.Pooling(spec.relu17, pool=P.Pooling.AVE,global_pooling=True)
    spec.fc1=L.InnerProduct(spec.maxpool, num_output=1024,weight_filler=dict(type='xavier'))
    #relu1=L.ReLU(fc1, in_place=True)
    spec.fc2 = L.InnerProduct(spec.fc1, num_output=10000,weight_filler=dict(type='xavier'))
    # phase=0 corresponds to TRAIN
    spec.loss = L.SoftmaxWithLoss(spec.fc2, spec.label,include=dict(phase=caffe.TRAIN))

    #acc = L.Accuracy(fc2, label)
    #return to_proto(loss, acc,include=dict(phase=TEST))

    if include_acc:             
        return caffe.to_proto(spec.fc1)
    else:
        spec.acc = L.Accuracy(spec.fc2, spec.label,include=dict(phase=caffe.TEST))
        #return spec.to_proto(spec.loss, spec.acc)
        return spec.to_proto()


def write_net():
    with open(train_proto, 'w') as f:
        f.write(str(create_net(train_list,batch_size=96)))
    with open(deploy_proto, 'w') as f:
        f.write(str(create_net(val_list,batch_size=30, include_acc=True)))

if __name__ == '__main__':
    write_net()
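Toward the goal stated at the top (MobileNet-v2), the same helpers can be composed into an inverted-residual block: 1x1 expansion, 3x3 depthwise, 1x1 linear projection, with a skip connection when stride=1 and channel counts match. A sketch assuming the group_conv/after_conv helpers above; the function name, expansion factor, and usage line are mine, not from the original script:

def inverted_residual(spec, name, bottom, in_channels, out_channels, stride=1, expand=6):
    """Sketch of a MobileNet-v2 inverted residual block built from the helpers above."""
    mid = in_channels * expand
    # 1x1 expansion conv + BN/Scale/ReLU
    spec[name + '_expand'] = group_conv(bottom, num_output=mid, kernel_size=1, stride=1, pad=0)
    expand_out = after_conv(spec[name + '_expand'])
    # 3x3 depthwise conv: group == channels, engine=1 (CAFFE), as in mobile_modeule above
    spec[name + '_dw'] = group_conv(expand_out, num_output=mid, GROUP=True, kernel_size=3,
                                    stride=stride, pad=1)
    dw_out = after_conv(spec[name + '_dw'])
    # 1x1 linear projection: BN/Scale but no ReLU (MobileNet-v2 uses a linear bottleneck)
    spec[name + '_project'] = group_conv(dw_out, num_output=out_channels, kernel_size=1,
                                         stride=1, pad=0)
    spec[name + '_project_bn'] = L.BatchNorm(spec[name + '_project'], use_global_stats=False)
    proj = L.Scale(spec[name + '_project_bn'], bias_term=True, in_place=True)
    # Identity skip connection only when spatial size and channel count are unchanged
    if stride == 1 and in_channels == out_channels:
        return L.Eltwise(bottom, proj, eltwise_param=dict(operation=1))
    return proj

# Hypothetical usage inside create_net, e.g. after spec.relu1 (32 channels):
# spec.block1 = inverted_residual(spec, 'block1', spec.relu1, 32, 16, stride=1)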

Slice:

https://github.com/BVLC/caffe/issues/4497

Reshape:

https://stackoverflow.com/questions/38480599/how-to-reshape-layer-in-caffe-with-python

Flatten, Permute, etc.:

https://programtalk.com/python-examples/caffe.NetSpec/

https://github.com/weiliu89/caffe/blob/ssd/python/caffe/model_libs.py

For example, Permute + Flatten as used in the SSD multibox head (from model_libs.py); Slice and Reshape are sketched after this snippet:

permute_name = "{}_perm".format(name)
net[permute_name] = L.Permute(net[name], order=[0, 2, 3, 1])
flatten_name = "{}_flat".format(name)
net[flatten_name] = L.Flatten(net[permute_name], axis=1)
loc_layers.append(net[flatten_name])
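A minimal Slice/Reshape sketch with NetSpec (the blob names and shapes here are made up for illustration; the patterns follow the links above):

import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 32, 56, 56]))

# Slice the 32-channel blob into two 16-channel halves along the channel axis;
# ntop=2 tells NetSpec this layer produces two tops.
n.slice_a, n.slice_b = L.Slice(n.data, ntop=2, slice_param=dict(axis=1, slice_point=[16]))

# Reshape: dim=0 copies the corresponding input dimension, dim=-1 is inferred.
n.flat = L.Reshape(n.slice_a, reshape_param=dict(shape=dict(dim=[0, -1, 1, 1])))
print(n.to_proto())

The functions below appear to be adapted from caffe's python/caffe/draw.py (with an extra netlayer argument for custom edge labels); they render the generated prototxt as a graph. The helpers they reference but do not define here (pydot, the *_STYLE dicts, get_pooling_types_dict, choose_color_by_layertype) come from that same module: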
def get_pydot_graph(caffe_net, rankdir, label_edges=True, phase=None, netlayer=None):
    """Create a data structure which represents the `caffe_net`.

    Parameters
    ----------
    caffe_net : object
    rankdir : {'LR', 'TB', 'BT'}
        Direction of graph layout.
    label_edges : boolean, optional
        Label the edges (default is True).
    phase : {caffe_pb2.Phase.TRAIN, caffe_pb2.Phase.TEST, None} optional
        Include layers from this network phase.  If None, include all layers.
        (the default is None)

    Returns
    -------
    pydot graph object
    """
    pydot_graph = pydot.Dot(caffe_net.name if caffe_net.name else 'Net',
                            graph_type='digraph',
                            rankdir=rankdir)
    pydot_nodes = {}
    pydot_edges = []
    for layer in caffe_net.layer:
        if phase is not None:
          included = False
          if len(layer.include) == 0:
            included = True
          if len(layer.include) > 0 and len(layer.exclude) > 0:
            raise ValueError('layer ' + layer.name + ' has both include '
                             'and exclude specified.')
          for layer_phase in layer.include:
            included = included or layer_phase.phase == phase
          for layer_phase in layer.exclude:
            included = included and not layer_phase.phase == phase
          if not included:
            continue
        node_label = get_layer_label(layer, rankdir)
        node_name = "%s_%s" % (layer.name, layer.type)
        if (len(layer.bottom) == 1 and len(layer.top) == 1 and
           layer.bottom[0] == layer.top[0]):
            # We have an in-place neuron layer.
            pydot_nodes[node_name] = pydot.Node(node_label,
                                                **NEURON_LAYER_STYLE)
        else:
            layer_style = LAYER_STYLE_DEFAULT
            layer_style['fillcolor'] = choose_color_by_layertype(layer.type)
            pydot_nodes[node_name] = pydot.Node(node_label, **layer_style)
        for bottom_blob in layer.bottom:
            pydot_nodes[bottom_blob + '_blob'] = pydot.Node('%s' % bottom_blob,
                                                            **BLOB_STYLE)
            edge_label = '""'
            pydot_edges.append({'src': bottom_blob + '_blob',
                                'dst': node_name,
                                'label': edge_label})
        for top_blob in layer.top:
            pydot_nodes[top_blob + '_blob'] = pydot.Node('%s' % (top_blob))
            try:
                edge_label=str(netlayer[str(layer.name)])
            except KeyError:
                edge_label = '""'
            #if label_edges:
            #    edge_label = get_edge_label(layer)
            #else:
            #    edge_label = '""'
            pydot_edges.append({'src': node_name,
                                'dst': top_blob + '_blob',
                                'label': edge_label})
    # Now, add the nodes and edges to the graph.
    for node in pydot_nodes.values():
        pydot_graph.add_node(node)
    for edge in pydot_edges:
        pydot_graph.add_edge(
            pydot.Edge(pydot_nodes[edge['src']],
                       pydot_nodes[edge['dst']],
                       label=edge['label']))
    return pydot_graph


def draw_net(caffe_net, rankdir, ext='png', phase=None, netlayer=None):
    """Draws a caffe net and returns the image string encoded using the given
    extension.

    Parameters
    ----------
    caffe_net : a caffe.proto.caffe_pb2.NetParameter protocol buffer.
    ext : string, optional
        The image extension (the default is 'png').
    phase : {caffe_pb2.Phase.TRAIN, caffe_pb2.Phase.TEST, None} optional
        Include layers from this network phase.  If None, include all layers.
        (the default is None)

    Returns
    -------
    string :
        Postscript representation of the graph.
    """
    return get_pydot_graph(caffe_net, rankdir, phase=phase, netlayer=netlayer).create(format=ext)


def draw_net_to_file(caffe_net, filename, rankdir='LR', phase=None, netlayer=None):
    """Draws a caffe net, and saves it to file using the format given as the
    file extension. Use '.raw' to output raw text that you can manually feed
    to graphviz to draw graphs.

    Parameters
    ----------
    caffe_net : a caffe.proto.caffe_pb2.NetParameter protocol buffer.
    filename : string
        The path to a file where the networks visualization will be stored.
    rankdir : {'LR', 'TB', 'BT'}
        Direction of graph layout.
    phase : {caffe_pb2.Phase.TRAIN, caffe_pb2.Phase.TEST, None} optional
        Include layers from this network phase.  If None, include all layers.
        (the default is None)
    """
    ext = filename[filename.rfind('.')+1:]
    with open(filename, 'wb') as fid:
        fid.write(draw_net(caffe_net, rankdir, ext, phase, netlayer))


def get_layer_label(layer, rankdir):
    """Define node label based on layer type.

    Parameters
    ----------
    layer : ?
    rankdir : {'LR', 'TB', 'BT'}
        Direction of graph layout.

    Returns
    -------
    string :
        A label for the current layer
    """

    if rankdir in ('TB', 'BT'):
        # If graph orientation is vertical, horizontal space is free and
        # vertical space is not; separate words with spaces
        separator = ' '
    else:
        # If graph orientation is horizontal, vertical space is free and
        # horizontal space is not; separate words with newlines
        separator = '\\n'

    if layer.type == 'Convolution' or layer.type == 'Deconvolution':
        # Outer double quotes needed or else colon characters don't parse
        # properly
        node_label = '"%s%s(%s)%skernel size: %d%sstride: %d%spad: %d\ndilation: %d%sgroup: %d"' %\
                     (layer.name,
                      separator,
                      layer.type,
                      separator,
                      layer.convolution_param.kernel_size[0] if len(layer.convolution_param.kernel_size._values) else 1,
                      separator,
                      layer.convolution_param.stride[0] if len(layer.convolution_param.stride._values) else 1,
                      separator,
                      layer.convolution_param.pad[0] if len(layer.convolution_param.pad._values) else 0,
                      layer.convolution_param.dilation[0] if len(layer.convolution_param.dilation._values) else 1,
                      separator,
                      layer.convolution_param.group)
    elif layer.type == 'ConvolutionDepthwise':
        # Outer double quotes needed or else colon characters don't parse
        # properly
        node_label = '"%s%s(%s)%skernel size: %d%sstride: %d%spad: %d\ndilation: %d%sgroup: %d"' %\
                     (layer.name,
                      separator,
                      layer.type,
                      separator,
                      layer.convolution_depthwise_param.kernel_size[0] if len(layer.convolution_depthwise_param.kernel_size._values) else 1,
                      separator,
                      layer.convolution_depthwise_param.stride[0] if len(layer.convolution_depthwise_param.stride._values) else 1,
                      separator,
                      layer.convolution_depthwise_param.pad[0] if len(layer.convolution_depthwise_param.pad._values) else 0,
                      layer.convolution_param.dilation[0] if len(layer.convolution_param.dilation._values) else 1,
                      separator,
                      layer.convolution_param.group)
    elif layer.type == 'Pooling':
        pooling_types_dict = get_pooling_types_dict()
        node_label = '"%s%s(%s %s)%skernel size: %d%sstride: %d%spad: %d"' %\
                     (layer.name,
                      separator,
                      pooling_types_dict[layer.pooling_param.pool],
                      layer.type,
                      separator,
                      layer.pooling_param.kernel_size,
                      separator,
                      layer.pooling_param.stride,
                      separator,
                      layer.pooling_param.pad)
    else:
        node_label = '"%s%s(%s)"' % (layer.name, separator, layer.type)
    return node_label
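A typical way to drive these drawing functions, mirroring caffe's draw_net.py script (the prototxt/png paths are just examples, and the functions above are assumed to be in the same file or importable):

from google.protobuf import text_format
from caffe.proto import caffe_pb2

net_param = caffe_pb2.NetParameter()
with open('./txt/denseNet1.prototxt') as f:
    text_format.Merge(f.read(), net_param)

# Pass an empty dict for netlayer so the KeyError fallback in get_pydot_graph
# yields empty edge labels; a real dict maps layer names to custom labels.
draw_net_to_file(net_param, './txt/denseNet1.png', rankdir='LR', netlayer={})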

A gist is a snippet you upload to GitHub yourself; Netscope then parses it from the gist URL to render the network:

http://ethereon.github.io/netscope/#/gist/edb4305c4ae1686c326dc7d5b43d6a94

https://cwlacewe.github.io/netscope/#/gist/edb4305c4ae1686c326dc7d5b43d6a94

sudo apt-get install nodejs

sudo apt-get install npm

sudo ln -s /usr/bin/nodejs /usr/bin/node

(Reason: https://blog.csdn.net/u012860063/article/details/52253808)

spple@spple:~$ nodejs -v 
v4.2.6
spple@spple:~$ npm -v 
3.5.2

sudo npm install http-server -g
cd netscope

http-server

 

Then open http://localhost:8080 in a browser.
