Problem and fix: after keras.backend.mean(), a KerasTensor's shape becomes () instead of (None, 1)

Problem description

  1. When building a network with the tensorflow and keras frameworks, an IDE's Debug view shows the shape of the tensors flowing through the network as (None, d1, d2, d3, ...). The None in the first position is the batch dimension: the network accepts batch_size samples at a time, which makes data flow and computation efficient;

  2. However, after calling keras.backend.mean() on a tensor (for example, while computing the MSE between two tensors), the shape of the result becomes () instead of (None, 1):

"""
input_ : 编码前的输入张量, 假设其Debug中的shape为(None, 120, 10)
output_ : 解码后的输出张量, 假设其Debug中的shape为(None, 120, 10)
"""
# 计算二者的MSE损失值
from keras import backend as K

# step1 : 计算绝对差值的平方
a = K.square(input_  - output_)		# a的KerasTensor shape:(None, 120, 10)

# step2 : 求均值
b = K.mean(a)						# b的KerasTensor shape:()
  3. This can lead to the following problem: for a fixed number of input samples, the number of values the network outputs changes with the batch_size parameter (a minimal sketch reproducing the shapes follows the example below):

Suppose we build a reconstruction-based model and feed it 20 samples, expecting the network to output 20 reconstruction loss values (MSE). Instead, with batch_size=2 the network outputs 10 loss values, and with batch_size=4 it outputs 5.
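Here is that sketch, using a concrete tensor in place of the symbolic (None, ...) one; the 4-sample batch and the name x are illustrative, not from the original model:

import tensorflow as tf
from keras import backend as K

x = tf.random.normal((4, 120, 10))       # stand-in for one batch of 4 samples
a = K.square(x)                          # shape (4, 120, 10) -- batch axis intact
print(K.mean(a).shape)                   # () -- the batch axis is averaged away too
print(K.mean(a, axis=[1, 2]).shape)      # (4,) -- one value per sample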


Cause analysis:

A KerasTensor whose shape is () is a scalar. The root cause is that K.mean() called without an axis argument averages over every axis, including the batch axis, so no matter how large batch_size is set, each batch produces exactly one result.
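The arithmetic behind the 10-and-5 observation can be checked in plain numpy (the array sizes mirror the assumed (20, 120, 10) input):

import numpy as np

data = np.random.rand(20, 120, 10)   # 20 samples, as in the example above
for batch_size in (2, 4):
    # each batch collapses to a single scalar, mimicking K.mean() with no axis
    losses = [data[i:i + batch_size].mean() for i in range(0, len(data), batch_size)]
    print(batch_size, len(losses))   # prints "2 10" and "4 5"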


Solution:

The idea is simple: change the shape from () to (None, 1), i.e. keep the batch axis and average only over the feature axes.

How to do it:

# Compute the MSE loss between the two tensors
import tensorflow as tf
from keras import backend as K

# step1 : reshape each sample into one flat vector
input_ = tf.reshape(input_, [-1, 120 * 10])
output_ = tf.reshape(output_, [-1, 120 * 10])

# step2 : square the element-wise difference
a = K.square(input_  - output_)			# KerasTensor shape of a: (None, 1200)

# step3 : take the mean over the feature axis only
b = K.mean(a, axis=-1, keepdims=True)	# KerasTensor shape of b: (None, 1)
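If the (None, 120, 10) layout should be kept, the reshape can be skipped by averaging over both feature axes directly; this is an equivalent sketch, not the method above:

# average over axes 1 and 2, then restore a trailing axis of size 1
b = K.mean(K.square(input_ - output_), axis=[1, 2])		# shape (None,)
b = K.expand_dims(b, axis=-1)							# shape (None, 1)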