Python reports no model after the call: code using Keras crashes when calling model.fit, with no error message...

I have successfully implemented and run an autoencoder on image data (MNIST digits). I use Spyder through Anaconda Navigator. I'm running Python 3.7.1.

I constructed a simple CNN following vetted examples. My code runs through building the model and loading the training data (in this case, CIFAR10). When I call `model.fit()`, the code crashes with no error message and leaves no variables in the kernel.

How might I monitor execution of this code to better understand why it is crashing?

Have I coded something incorrectly that is causing the crash? Or, perhaps is this an environment or memory error?

I have copied similar code from presumably working CNN examples and replicated the behavior with published code (although my autoencoder code works in the same environment).

Here is the relevant section of my code:

```python
from keras.layers import Input, Dense, Flatten, Conv2D, MaxPooling2D
from keras.models import Model
from keras.utils import to_categorical
from keras.datasets import cifar10

proceedtofit = True

#define input shape
input=Input(shape=(32,32,3))

#define layers
predictions=Conv2D(16,(3,3),activation='relu',padding='same')(input)
predictions=MaxPooling2D(pool_size=(2,2),strides=None,padding='same')(predictions)
predictions=Conv2D(4,(3,3),activation='relu',padding='same')(predictions)
predictions=MaxPooling2D(pool_size=(2,2),strides=None,padding='same')(predictions)
predictions=Flatten()(predictions)
predictions=Dense(32,activation='relu')(predictions)
predictions=Dense(10,activation='sigmoid')(predictions)

#integrate into model
model=Model(inputs=input,outputs=predictions)
#print("Succesfully integrated model.")
model.summary()

#compile (choose optimizer and loss function)
model.compile(loss='categorical_crossentropy',metrics=['accuracy'],optimizer='adam')

#input training and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Convert class vectors to binary class matrices.
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

#train model
if proceedtofit:
    model.fit(x_train, y_train, batch_size=10, epochs=50, shuffle=True,
              validation_data=(x_test, y_test))

print("Finished fit.")
```

The code executes in the kernel and produces the expected model summary. If `proceedtofit` is False, the code exits gracefully. If `proceedtofit` is True, the code calls `model.fit()` and crashes. The verbose output, from start to finish, is:

```
Python 3.7.0 (default, Jun 28 2018, 07:39:16)
Type "copyright", "credits" or "license" for more information.

IPython 7.2.0 -- An enhanced Interactive Python.

runfile('/Users/Fox/Documents/Python Machine Learning/convclass.py', wdir='/Users/Fox/Documents/Python Machine Learning')
WARNING:tensorflow:From /Applications/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 32, 32, 3)         0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 32, 32, 16)        448
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 16)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 4)         580
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 4)           0
_________________________________________________________________
flatten_1 (Flatten)          (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 32)                8224
_________________________________________________________________
dense_2 (Dense)              (None, 10)                330
=================================================================
Total params: 9,582
Trainable params: 9,582
Non-trainable params: 0
_________________________________________________________________
(50000, 1)
(50000, 10)
WARNING:tensorflow:From /Applications/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
2019-08-04 16:32:52.400023: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2019-08-04 16:32:52.400364: I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.
```

At this point, the code exits and returns me to the kernel prompt. Training (fitting) did not execute, and no error was returned. The model is no longer present in memory; calling `model.summary()` at the prompt yields the following error:

```
In [1]: model.summary()
Traceback (most recent call last):

  File "", line 1, in <module>
    model.summary()

NameError: name 'model' is not defined
```

Following a comment, I ran the code in a terminal. I did get more verbose output and an error report. I don't understand it yet, but at least it is a place to start. Thoughts? (See below.)

```
OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been
linked into the program. That is dangerous, since it can degrade performance or
cause incorrect results. The best thing to do is to ensure that only a single
OpenMP runtime is linked into the process, e.g. by avoiding static linking of the
OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround
you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program
to continue to execute, but that may cause crashes or silently produce incorrect
results. For more information, please see
http://www.intel.com/software/products/support/.
Abort trap: 6
```

I found this: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized. It looks promising. I will explore the suggestions offered, and then perhaps this question should be combined with that other discussion?
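A quick way to confirm that a duplicate OpenMP runtime is really the culprit is the unsafe workaround the error message itself mentions: set `KMP_DUPLICATE_LIB_OK=TRUE` before Keras/TensorFlow load the library. A minimal sketch, useful for diagnosis only, not as a fix:

```python
import os

# Unsafe, unsupported workaround quoted in the OMP error above: allow duplicate
# OpenMP runtimes so the process is not aborted. Use only to confirm the
# diagnosis; the real fix is cleaning up the conda environment.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# Import Keras (and therefore TensorFlow) only after the variable is set.
from keras.models import Model
```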

Solution

After running the code in a command shell rather than Spyder, I captured the error and identified a related question that had already been answered.

Based on the discussion in Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized, I removed tensorflow using `conda remove tensorflow` and then reinstalled tensorflow and keras using

```
conda install -c tensorflow
```

and

```
conda install -c keras
```

I then reran the code and everything worked in both the command shell and in Spyder.
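To verify the reinstall, a minimal check (assuming the same conda environment) is to import both packages and print their versions; if the environment is healthy, this runs without the OMP abort:

```python
# Sanity check after reinstalling: importing TensorFlow and Keras should no
# longer abort the process, and the versions confirm which builds are active.
import tensorflow as tf
import keras

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
```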

`model.fit()` is the Keras function used to train a model. Its basic signature is:

```python
model.fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
          validation_split=0.0, validation_data=None, shuffle=True,
          class_weight=None, sample_weight=None, initial_epoch=0,
          steps_per_epoch=None, validation_steps=None, validation_batch_size=None,
          validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)
```

The parameters mean the following:

- `x`: the input data, usually a NumPy array. If the model has multiple inputs, a list can be passed.
- `y`: the labels, usually a NumPy array. If the model has multiple outputs, a list can be passed.
- `batch_size`: integer; the number of samples per gradient-descent batch. Defaults to 32.
- `epochs`: integer; the number of training epochs (passes over the data). Defaults to 1.
- `verbose`: logging mode. 0 prints nothing to standard output, 1 shows a progress bar, 2 prints one line per epoch.
- `callbacks`: a list of Keras callbacks. Callbacks are functions invoked at specific points during training, used for things such as saving the model or logging training history.
- `validation_split`: a float between 0 and 1; the fraction of the training data to hold out as a validation set. The model splits off this portion, does not train on it, and evaluates the validation loss and metrics on it at the end of each epoch.
- `validation_data`: the validation set; either a tuple of inputs and labels, or a generator.
- `shuffle`: boolean; whether to shuffle the input data at the start of each epoch.
- `class_weight`: weights applied to different classes, used to balance classes that are unevenly represented in the training data.
- `sample_weight`: per-sample weights, used to adjust each sample's contribution to the loss.
- `initial_epoch`: the epoch at which to start training.
- `steps_per_epoch`: integer or None; the number of training steps (batches) to run before declaring one epoch finished. If unspecified, len(x) / batch_size is used.
- `validation_steps`: only meaningful when `steps_per_epoch` is specified; the number of validation steps (batches) to run. If unspecified, len(validation_data) / batch_size is used.
- `validation_batch_size`: the batch size used for validation.
- `validation_freq`: how often to run validation. Defaults to 1, meaning validation runs at the end of every epoch; 2 means every two epochs, and so on.
- `max_queue_size`: integer; the maximum size of the generator queue.
- `workers`: integer; the maximum number of processes used for generator-based parallelism.
- `use_multiprocessing`: boolean; whether to use process-based parallelism. On Windows this usually needs to be set to False.

Of these, `x` and `y` are the essential arguments; the others have default values and can be adjusted as needed.
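For illustration, here is a minimal sketch of a typical call, reusing the CIFAR-10 arrays prepared earlier; the parameter values are arbitrary examples, not the values from the question:

```python
# Illustrative fit: a few epochs with a moderate batch size, holding out 10% of
# the training data for validation via validation_split instead of passing
# validation_data explicitly.
history = model.fit(
    x_train, y_train,
    batch_size=64,
    epochs=5,
    verbose=1,
    validation_split=0.1,
    shuffle=True,
)

# history.history maps metric names such as 'loss' and 'val_loss' to
# per-epoch values, which is useful for plotting learning curves.
print(history.history.keys())
```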