Testing the `trainable` attribute in Keras

Multi-GPU tests

Test 1: if we freeze a layer of the original model AFTER generating the multi-GPU model, are the weights actually frozen?

Answer: no.

 

Test 2: if we freeze a layer of the original model BEFORE generating the multi-GPU model, are the weights actually frozen?

Answer: yes.

 

Conclusion: to freeze a layer, it is enough to freeze it on the original model; there is no need to touch the corresponding layer of the derived multi-GPU model. However, after freezing you should regenerate the multi-GPU model. Repeated tests all confirm this conclusion.
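Reproducing this directly requires multiple GPUs, so here is a minimal single-device sketch of the same mechanism, using an ordinary wrapper model as a stand-in for the multi-GPU copy (all names are illustrative): freezing a layer on the original model before building and compiling the wrapper is what freezes the shared weights.

```python
import numpy as np
from tensorflow.keras import layers, models

# Original single-device model (stand-in for the pre-multi-GPU model)
inner = models.Sequential([
    layers.Dense(4, activation='relu', input_shape=(3,), name='d1'),
    layers.Dense(1, name='d2'),
])

# Freeze a layer on the ORIGINAL model BEFORE building the wrapper,
# mirroring "freeze first, then regenerate the multi-GPU model".
inner.get_layer('d1').trainable = False

# Wrapper model sharing inner's weights (stand-in for the multi-GPU copy)
inp = layers.Input(shape=(3,))
wrapper = models.Model(inp, inner(inp))
wrapper.compile(optimizer='sgd', loss='mse')

w_before = inner.get_layer('d1').get_weights()[0].copy()
wrapper.train_on_batch(np.random.rand(8, 3), np.random.rand(8, 1))
w_after = inner.get_layer('d1').get_weights()[0]
print(np.allclose(w_before, w_after))  # True: the frozen layer did not move
```

Because the wrapper shares the inner model's weight variables, freezing before compiling the wrapper is enough; nothing needs to be set on the wrapper itself.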

Multi-model tests

The reference setup is an image-deblurring GAN: a generator `g`, a discriminator `d`, and a combined model `d_on_g` (the discriminator stacked on the generator's output). The helper functions (`generator_model`, `discriminator_model`, `generator_containing_discriminator_multiple_outputs`, `wasserstein_loss`, `perceptual_loss`, `generate_one_batch_deblur_imgs`) are defined elsewhere in that project:

```python
import numpy as np
import keras

g = generator_model()
d = discriminator_model()


def train_multiple_outputs(batch_size, critic_updates=5, learning_rate=1E-4):
    d_on_g = generator_containing_discriminator_multiple_outputs(g, d)

    d_opt = keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
    d_on_g_opt = keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08)

    # Compile d while it is trainable, then freeze it before compiling the
    # combined model: each compile() snapshots its own trainable state.
    d.trainable = True
    d.compile(optimizer=d_opt, loss=wasserstein_loss)
    d.trainable = False
    loss = [perceptual_loss, wasserstein_loss]
    loss_weights = [100, 1]
    d_on_g.compile(optimizer=d_on_g_opt, loss=loss, loss_weights=loss_weights)
    d.trainable = True

    d_losses = []
    d_on_g_losses = []
    # WGAN-style labels: +1 for real, -1 for fake
    output_true_batch, output_false_batch = np.ones((batch_size, 1)), -np.ones((batch_size, 1))
    for index in range(4000):
        image_full_batch, image_blur_batch = generate_one_batch_deblur_imgs(batch_size)

        generated_images = g.predict(x=image_blur_batch, batch_size=batch_size)

        # Train the discriminator several times per generator step
        for _ in range(critic_updates):
            d_loss_real = d.train_on_batch(image_full_batch, output_true_batch)
            d_loss_fake = d.train_on_batch(generated_images, output_false_batch)
            d_loss = 0.5 * np.add(d_loss_fake, d_loss_real)
            d_losses.append(d_loss)

        d.trainable = False

        d_on_g_loss = d_on_g.train_on_batch(image_blur_batch, [image_full_batch, output_true_batch])
        d_on_g_losses.append(d_on_g_loss)

        d.trainable = True
```

Test 1: with models a and b, where b's input is a's output, do the two share the same parameters? For example, if we change a's parameters through b, do a's parameters change as well?
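This can be checked directly: when b is built by calling a on an input tensor, the two hold the very same weight variables, so a change made through either model is visible through the other. A minimal sketch (the names a/b are just illustrative):

```python
import numpy as np
from tensorflow.keras import layers, models

# a: a small model; b: a model whose input is passed through a
a = models.Sequential([layers.Dense(2, input_shape=(2,), name='a_dense')])
inp = layers.Input(shape=(2,))
b = models.Model(inp, layers.Dense(1, name='b_head')(a(inp)))

# The weight variables are literally the same objects
print(b.weights[0] is a.weights[0])  # True

# Overwriting the weights through a is visible through b as well
a.get_layer('a_dense').set_weights([np.zeros((2, 2)), np.zeros(2)])
print(np.allclose(b.get_weights()[0], 0.0))  # True
```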

 

Test 2: if we only toggle a's `trainable` flag, will the parameters of a that were marked non-trainable still change when training b?

Yes, they change: temporarily flipping a's `trainable` and then training b still updates a, because the flag only takes effect when the model that owns the training step is (re)compiled.
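A small sketch of this gotcha (layer and model names are illustrative): b is compiled, and trained once, while a is trainable; flipping `a.trainable` afterwards without recompiling does not stop a from being updated.

```python
import numpy as np
from tensorflow.keras import layers, models

# b contains a; the names a/b are just illustrative
a = models.Sequential([layers.Dense(2, input_shape=(2,), use_bias=False, name='a_dense')])
inp = layers.Input(shape=(2,))
b = models.Model(inp, layers.Dense(1, name='b_head')(a(inp)))

# Deterministic weights so the gradient is guaranteed non-zero
a.get_layer('a_dense').set_weights([np.ones((2, 2))])
b.get_layer('b_head').set_weights([np.ones((2, 1)), np.zeros(1)])

# Compile (and run one step) while a is trainable ...
b.compile(optimizer='sgd', loss='mse')
x, y = np.ones((8, 2)), np.zeros((8, 1))
b.train_on_batch(x, y)

# ... then flip the flag WITHOUT recompiling
a.trainable = False

w_before = a.get_weights()[0].copy()
b.train_on_batch(x, y)
w_after = a.get_weights()[0]
print(np.allclose(w_before, w_after))  # False: a still trained; recompile b to freeze it
```

This is the same reason the GAN training loop above can safely flip `d.trainable` between `train_on_batch` calls: each compiled model keeps the trainable state it saw at compile time.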

 

Test 3:


In Python, and in deep learning in particular, Keras is a popular library that ships pretrained application models such as VGG16 and ResNet50 for image-classification tasks. For example, to use the Keras VGG16 application for a cat-vs-dog binary classification problem, you can proceed as follows:

1. Install the required libraries:

```bash
pip install tensorflow keras
```

2. Load the pretrained VGG16 model and freeze the pretrained layers, since only the new classification head needs to be fitted to the new dataset:

```python
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

vgg = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = Flatten()(vgg.output)
output_layer = Dense(1, activation='sigmoid')(x)  # one unit for binary classification
model = Model(inputs=vgg.input, outputs=output_layer)

for layer in vgg.layers:
    layer.trainable = False  # freeze all pretrained convolutional layers
```

3. Prepare the cat-dog dataset, split into training and validation sets, with data augmentation to increase sample diversity (`preprocess_input` already handles input scaling, so no extra `rescale` is applied):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')
validation_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = train_datagen.flow_from_directory(directory="path_to_train_data",
                                                    target_size=(224, 224),
                                                    batch_size=32,
                                                    class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(directory="path_to_val_data",
                                                              target_size=(224, 224),
                                                              batch_size=32,
                                                              class_mode='binary')
```

4. Compile and train the model:

```python
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator, epochs=10, validation_data=validation_generator)
```

5. Evaluate the model's accuracy:

```python
test_loss, test_accuracy = model.evaluate(validation_generator)
print(f"Test accuracy: {test_accuracy * 100:.2f}%")
```