A summary of ways to get a layer's output shape in Keras

[Date] 2018.12.24

[Title] A summary of ways to get a layer's output shape in Keras

Overview

In Keras, to get a layer's output shape, first obtain the layer object and then read its output or output_shape attribute (to get the input shape, use input/input_shape instead). The two attributes return different things:

output: the layer's symbolic output tensor; printing it shows the layer name together with the output shape

output_shape: a plain tuple containing just the layer's output shape

There are two ways to obtain a layer object: model.get_layer() and model.layers[index].

Of course, you can also call model.summary() to print the whole model, which shows every layer's output shape as well.
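For example, a minimal sketch of the two attributes; it assumes model is the VGG-style test model built in section 4 below, where index 0 is the Input layer and index 1 is the first Conv2D, and the exact tensor string depends on the Keras/TensorFlow version:

print(model.get_layer(index=1).output)
# something like Tensor("conv_1/Relu:0", shape=(?, 94, 94, 64), dtype=float32)
print(model.get_layer(index=1).output_shape)
# (None, 94, 94, 64)
print(model.get_layer(index=1).input_shape)
# (None, 96, 96, 3)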

1. Method 1

Use model.get_layer(name=None, index=None), which returns a layer object by its name or by its index.

Specifically:

1.1 Output of a specific layer:

model.get_layer(index=0).output, or

model.get_layer(index=0).output_shape

1.2 Outputs of all layers

for i in range(len(model.layers)):

    print(model.get_layer(index=i).output)
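get_layer can also look a layer up by name, which is usually more readable than an index. A small sketch, assuming a layer was created with name='conv_1' (as in the test code in section 4):

print(model.get_layer(name='conv_1').output_shape)
print(model.get_layer('conv_1').output)  # the name can also be passed positionally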

2. Method 2

Use model.layers[index] to obtain the layer object; everything else works the same as in Method 1.

2.1 Output of a specific layer:

model.layers[0].output, or

model.layers[0].output_shape

2.2 Outputs of all layers

for layer in model.layers:

    print(layer.output)
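In practice it is handy to print the layer name next to its shape; a minimal sketch:

for layer in model.layers:
    print(layer.name, layer.output_shape)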

3. Method 3: print the whole model with model.summary()
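model.summary() prints every layer's name, output shape, and parameter count to stdout. If you want the summary as a string instead (for logging, say), newer Keras versions accept a print_fn callback; a sketch, assuming your Keras version supports it:

summary_lines = []
model.summary(print_fn=summary_lines.append)  # collect each printed line
summary_text = '\n'.join(summary_lines)
print(summary_text)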

4. Test

[Test code]

from keras.models import Model
from keras.layers import Dense, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D

# Input: 96x96 RGB images
x = Input(shape=(96, 96, 3))

# VGG-style convolutional blocks
conv1_1 = Conv2D(64, kernel_size=(3, 3), padding='valid', activation='relu', name='conv_1')(x)
conv1_2 = Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu', name='conv_2')(conv1_1)
pool1_1 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_frame_1')(conv1_2)

conv1_3 = Conv2D(128, kernel_size=(3, 3), padding='same', activation='relu', name='conv_3')(pool1_1)
conv1_4 = Conv2D(128, kernel_size=(3, 3), padding='same', activation='relu', name='conv_4')(conv1_3)
pool1_2 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_frame_2')(conv1_4)

conv_5 = Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu', name='conv_5')(pool1_2)
conv_6 = Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu', name='conv_6')(conv_5)
conv_7 = Conv2D(256, kernel_size=(3, 3), padding='same', activation='relu', name='conv_7')(conv_6)
pool_3 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_final_3')(conv_7)

conv_8 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_8')(pool_3)
conv_9 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_9')(conv_8)
conv_10 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_10')(conv_9)
pool_4 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_final_4')(conv_10)

conv_11 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_11')(pool_4)
conv_12 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_12')(conv_11)
conv_13 = Conv2D(512, kernel_size=(3, 3), padding='same', activation='relu', name='conv_13')(conv_12)
pool_5 = MaxPooling2D((2, 2), strides=(2, 2), name='pool_final_5')(conv_13)

# Classification head
flatten = Flatten()(pool_5)
fc1 = Dense(256, activation='relu')(flatten)
out_put = Dense(2, activation='softmax')(fc1)

model = Model(inputs=x, outputs=out_put)
model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

print('method 3:')
model.summary()  # method 3: print the whole model

print('method 1:')
for i in range(len(model.layers)):
    print(model.get_layer(index=i).output)  # method 1: get_layer(index=...)

print('method 2:')
for layer in model.layers:
    print(layer.output_shape)  # method 2: model.layers[index]

 

[Output]
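The original console dump is omitted here. For reference, with the default channels_last data format the output shapes reported for this model should be:

(None, 96, 96, 3)    Input
(None, 94, 94, 64)   conv_1
(None, 94, 94, 64)   conv_2
(None, 47, 47, 64)   pool_frame_1
(None, 47, 47, 128)  conv_3 / conv_4
(None, 23, 23, 128)  pool_frame_2
(None, 23, 23, 256)  conv_5 / conv_6 / conv_7
(None, 11, 11, 256)  pool_final_3
(None, 11, 11, 512)  conv_8 / conv_9 / conv_10
(None, 5, 5, 512)    pool_final_4
(None, 5, 5, 512)    conv_11 / conv_12 / conv_13
(None, 2, 2, 512)    pool_final_5
(None, 2048)         Flatten
(None, 256)          Dense(256)
(None, 2)            Dense(2)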

 
