Keras Usage of Metrics (Evaluation Metrics)

Usage of metrics

A metric is a function that is used to judge the performance of your model. Metric functions are to be supplied in the metrics parameter when a model is compiled.

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=['mae', 'acc'])

from keras import metrics

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[metrics.mae, metrics.categorical_accuracy])

A metric function is similar to a loss function, except that the results from evaluating a metric are not used when training the model.

You can either pass the name of an existing metric, or pass a Theano/TensorFlow symbolic function (see Custom metrics).

Arguments
  • y_true: True labels. Theano/TensorFlow tensor.
  • y_pred: Predictions. Theano/TensorFlow tensor of the same shape as y_true.
Returns

Single tensor value representing the mean of the output array across all datapoints.


Available metrics

binary_accuracy

binary_accuracy(y_true, y_pred)

categorical_accuracy

categorical_accuracy(y_true, y_pred)

sparse_categorical_accuracy

sparse_categorical_accuracy(y_true, y_pred)

top_k_categorical_accuracy

top_k_categorical_accuracy(y_true, y_pred, k=5)

sparse_top_k_categorical_accuracy

sparse_top_k_categorical_accuracy(y_true, y_pred, k=5)
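
For example, the built-in metric functions can be passed to compile() directly, and top_k_categorical_accuracy can be wrapped to use a non-default k. The following is a minimal sketch that assumes model is an already-defined Keras model with a softmax output; the wrapper name top_3_accuracy is illustrative.

from keras import metrics

def top_3_accuracy(y_true, y_pred):
    # Wraps the built-in metric to use k=3 instead of the default k=5.
    return metrics.top_k_categorical_accuracy(y_true, y_pred, k=3)

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=[metrics.categorical_accuracy, top_3_accuracy])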

Custom metrics

Custom metrics can be passed at the compilation step. The function would need to take (y_true, y_pred) as arguments and return a single tensor value.

import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])

TensorFlow 2 with Keras: Usage and Tutorials

In TensorFlow 2.0, the way estimators are constructed has changed compared to previous versions[^1]. The shift emphasizes building models with high-level APIs such as `tf.keras`, which simplifies much of model development.

Official documentation covers saving and loading Keras models in TensorFlow 2.x[^2]. The workflow is: define the network architecture, compile it by specifying an optimizer, loss function, and metrics, then train on data. Afterwards you can save either the weights alone (the parameters learned during training) or the entire model, which also includes configuration details such as layer types and their connections. Loading a saved model lets you resume training where you left off without redefining and recompiling everything from scratch (a minimal save/load sketch is given at the end of this section).

When constructing custom losses in Keras, such as categorical cross-entropy with label smoothing, the syntax is:

```python
import tensorflow

cce_loss = tensorflow.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
```

This creates an instance named `cce_loss` that computes categorical cross-entropy with the label smoothing parameter set to 0.1[^3].

For structured datasets, feature columns combined with the tf.data API provide efficient preprocessing pipelines before the data is fed into a model built from Keras layers, whether stacked in a Sequential object or assembled with the Functional API[^4].

As a practical example, the script `bx.py` exposes command-line flags that let the user choose between training and inference, and between categorized and non-categorized outputs, directly from the terminal.
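
To make the save/load workflow described above concrete, here is a minimal sketch assuming TensorFlow 2.x with `tf.keras`; the toy data, layer sizes, and file names are illustrative placeholders, not part of the original text.

```python
# A minimal sketch of saving and reloading a tf.keras model in TensorFlow 2.x.
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 8 features, binary labels.
x = np.random.rand(100, 8).astype('float32')
y = np.random.randint(0, 2, size=(100, 1))

# Define and compile a small model, specifying optimizer, loss, and metrics.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=2, verbose=0)

# Option 1: save only the learned weights.
model.save_weights('my_weights.h5')

# Option 2: save the entire model (architecture + weights + optimizer state).
model.save('my_model.h5')

# Reload the full model and continue training without redefining or recompiling it.
restored = tf.keras.models.load_model('my_model.h5')
restored.fit(x, y, epochs=2, verbose=0)
```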