Python: cannot import LayerNormalization from keras.layers.normalization

While writing a Python deep-learning script today I ran into a problem. Without further ado, here is the error message:

ImportError: cannot import name 'LayerNormalization' 
from 'tensorflow.python.keras.layers.normalization'
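
This error usually indicates that the standalone keras package and the installed tensorflow are out of sync. On TensorFlow 2.x the layer can normally be imported straight from tf.keras.layers rather than the private tensorflow.python.keras path. A minimal sketch, assuming a TensorFlow 2.x install:

import tensorflow as tf
from tensorflow.keras.layers import LayerNormalization

print(tf.__version__)          # should be a 2.x release
layer = LayerNormalization()   # constructs without an ImportError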

None of the fixes I found online worked, so I replaced the latest keras 2.6.0 with an older version (2.0.0).
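
The exact command depends on your environment, but the downgrade was roughly the following (a sketch, assuming pip manages your packages):

pip uninstall keras
pip install keras==2.0.0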

After the installation finished, I ran the following code again:

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
import keras

# Load MNIST and flatten each 28x28 image into a 784-dimensional vector
(train_datas, train_labels), (test_datas, test_labels) = mnist.load_data()
train_datas = train_datas.reshape(60000, 28 * 28)
test_datas = test_datas.reshape(10000, 28 * 28)

# Scale pixel values to [0, 1]
train_datas = train_datas / 255
test_datas = test_datas / 255

# One-hot encode the labels (10 classes)
train_labels = keras.utils.to_categorical(train_labels, 10)
test_labels = keras.utils.to_categorical(test_labels, 10)

model = Sequential()
# Hidden layer
model.add(Dense(1000, activation="relu", input_shape=(28 * 28,)))
# Hidden layer
model.add(Dense(500, activation="relu"))
# Output layer
model.add(Dense(10, activation="softmax"))
model.summary()

model.compile(optimizer=SGD(), loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_datas, train_labels, batch_size=64, epochs=5,
          validation_data=(test_datas, test_labels))

# Evaluate on the test set and save the trained model
score = model.evaluate(test_datas, test_labels)
print("Loss", score[0])
print("Accuracy", score[1])
model.save("mnist_model.h5")

But this time the interpreter threw another error:

Traceback (most recent call last):
  File "F:/MyProject/main.py", line 16, in <module>
    model=Sequential()
  File "C:\Users\xxx\AppData\Roaming\Python\Python36\site-packages\keras\models.py", line 381, in __init__
    name = prefix + str(K.get_uid(prefix))
  File "C:\Users\xxx\AppData\Roaming\Python\Python36\site-packages\keras\backend\tensorflow_backend.py", line 47, in get_uid
    graph = tf.get_default_graph()
AttributeError: module 'tensorflow' has no attribute 'get_default_graph'
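
The root cause is that standalone Keras 2.0.x still calls the TensorFlow 1.x API tf.get_default_graph(), which in TensorFlow 2.x only exists under tf.compat.v1. A quick diagnostic sketch, assuming both packages are importable:

import tensorflow as tf
import keras

print("tensorflow:", tf.__version__)   # e.g. 2.5.0
print("keras:", keras.__version__)     # e.g. 2.0.0
print(hasattr(tf, "get_default_graph"))            # False on TF 2.x
print(hasattr(tf.compat.v1, "get_default_graph"))  # True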

At this point, I changed the imports from the standalone keras package to tensorflow.keras. The updated code is as follows:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
import tensorflow.keras as keras

# Load MNIST and flatten each 28x28 image into a 784-dimensional vector
(train_datas, train_labels), (test_datas, test_labels) = mnist.load_data()
train_datas = train_datas.reshape(60000, 28 * 28)
test_datas = test_datas.reshape(10000, 28 * 28)

# Scale pixel values to [0, 1]
train_datas = train_datas / 255
test_datas = test_datas / 255

# One-hot encode the labels (10 classes)
train_labels = keras.utils.to_categorical(train_labels, 10)
test_labels = keras.utils.to_categorical(test_labels, 10)

model = Sequential()
# Hidden layer
model.add(Dense(1000, activation="relu", input_shape=(28 * 28,)))
# Hidden layer
model.add(Dense(500, activation="relu"))
# Output layer
model.add(Dense(10, activation="softmax"))
model.summary()

model.compile(optimizer=SGD(), loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_datas, train_labels, batch_size=64, epochs=5,
          validation_data=(test_datas, test_labels))

# Evaluate on the test set and save the trained model
score = model.evaluate(test_datas, test_labels)
print("Loss", score[0])
print("Accuracy", score[1])
model.save("mnist_model.h5")

Results may differ across keras and tensorflow versions. My keras and tensorflow versions are 2.0.0 and 2.5.0 respectively (tensorflow being the latest release at the time). Here is my run output:

2021-08-14 13:01:00.929419: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-08-14 13:01:00.929820: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-08-14 13:01:03.952065: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2021-08-14 13:01:04.797760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:02:00.0 name: GeForce MX150 computeCapability: 6.1
coreClock: 1.5315GHz coreCount: 3 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 44.76GiB/s
2021-08-14 13:01:04.800119: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-08-14 13:01:04.801820: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2021-08-14 13:01:04.803537: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found
2021-08-14 13:01:04.805275: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
2021-08-14 13:01:04.806951: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
2021-08-14 13:01:04.808840: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cusolver64_11.dll'; dlerror: cusolver64_11.dll not found
2021-08-14 13:01:04.810682: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found
2021-08-14 13:01:04.812556: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2021-08-14 13:01:04.812951: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1766] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-08-14 13:01:04.819549: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-14 13:01:04.821656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-08-14 13:01:04.822126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 1000)              785000    
_________________________________________________________________
dense_1 (Dense)              (None, 500)               500500    
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5010      
=================================================================
Total params: 1,290,510
Trainable params: 1,290,510
Non-trainable params: 0
_________________________________________________________________
2021-08-14 13:01:05.899488: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/5
938/938 [==============================] - 10s 10ms/step - loss: 0.7354 - accuracy: 0.8320 - val_loss: 0.3555 - val_accuracy: 0.9035
Epoch 2/5
938/938 [==============================] - 10s 11ms/step - loss: 0.3255 - accuracy: 0.9103 - val_loss: 0.2804 - val_accuracy: 0.9227
Epoch 3/5
938/938 [==============================] - 9s 10ms/step - loss: 0.2709 - accuracy: 0.9244 - val_loss: 0.2454 - val_accuracy: 0.9310
Epoch 4/5
938/938 [==============================] - 9s 9ms/step - loss: 0.2387 - accuracy: 0.9329 - val_loss: 0.2218 - val_accuracy: 0.9370
Epoch 5/5
938/938 [==============================] - 9s 9ms/step - loss: 0.2148 - accuracy: 0.9398 - val_loss: 0.1998 - val_accuracy: 0.9436
313/313 [==============================] - 1s 3ms/step - loss: 0.1998 - accuracy: 0.9436
Loss 0.19979436695575714
Accuracy 0.9435999989509583

Process finished with exit code 0
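
Since the script saves the trained model to mnist_model.h5, a quick way to sanity-check the result is to reload it and predict a few test digits. A small sketch, assuming the same environment as above:

import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import load_model

# Reload the model saved by the training script and predict on five test images
model = load_model("mnist_model.h5")
(_, _), (test_datas, test_labels) = mnist.load_data()
test_datas = test_datas.reshape(10000, 28 * 28) / 255
preds = model.predict(test_datas[:5])
print(np.argmax(preds, axis=1))  # predicted digits
print(test_labels[:5])           # ground-truth digits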

`tf.keras.layers.LayerNormalization` is a TensorFlow layer that implements Layer Normalization. Layer normalization is a normalization technique designed to reduce internal covariate shift in deep neural networks. It normalizes the features of each individual sample rather than the whole batch.

Layer normalization is computed as follows:

1. For an input tensor x, compute its mean μ and variance σ².
2. Normalize the input as (x - μ) / sqrt(σ² + ε), where ε is a small constant that prevents division by zero.
3. Scale and shift the normalized values with two trainable parameters (a scale factor and an offset): gamma * normalized value + beta.

`tf.keras.layers.LayerNormalization` can be used as a layer in a neural network model to apply layer normalization. It accepts input tensors of any dimensionality, and its trainable parameters are updated automatically during training. Here is a simple example using `tf.keras.layers.LayerNormalization`:

```python
import tensorflow as tf

# Create a LayerNormalization layer
layer_norm = tf.keras.layers.LayerNormalization()

# Create an input tensor
input_tensor = tf.keras.Input(shape=(64,))

# Apply layer normalization
normalized_tensor = layer_norm(input_tensor)

# Build a model
model = tf.keras.Model(inputs=input_tensor, outputs=normalized_tensor)
```

In this example, `input_tensor` is an input tensor of shape (batch_size, 64), and `normalized_tensor` is the output tensor after applying layer normalization.
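
To make the formula above concrete, here is a small sketch (values are illustrative) that compares the layer's output against a manual computation of (x - μ) / sqrt(σ² + ε) with the default gamma = 1 and beta = 0:

```python
import numpy as np
import tensorflow as tf

x = np.array([[1.0, 2.0, 3.0, 4.0]], dtype="float32")

# LayerNormalization normalizes over the last axis of each sample
layer = tf.keras.layers.LayerNormalization(epsilon=1e-3)
print(layer(x).numpy())

# Manual computation: (x - mean) / sqrt(var + eps)
mean = x.mean(axis=-1, keepdims=True)
var = x.var(axis=-1, keepdims=True)
print((x - mean) / np.sqrt(var + 1e-3))
```

Both print statements should produce approximately the same normalized values.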
