Accuracy metrics
Accuracy class
tf.keras.metrics.Accuracy(name="accuracy", dtype=None)
Calculates how often predictions equal labels.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
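The running total / count bookkeeping described above can be sketched in plain Python. This is a simplified illustration of the arithmetic, not the actual TensorFlow implementation:

```python
def accuracy_update(total, count, y_true, y_pred, sample_weight=None):
    """Accumulate weighted match counts; weights default to 1, 0 masks."""
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    for t, p, w in zip(y_true, y_pred, sample_weight):
        total += w * (1.0 if t == p else 0.0)
        count += w
    return total, count

total, count = accuracy_update(0.0, 0.0, [1, 2, 3, 4], [0, 2, 3, 4])
print(total / count)  # 0.75

# masking the last two samples with weight 0
total, count = accuracy_update(0.0, 0.0, [1, 2, 3, 4], [0, 2, 3, 4],
                               sample_weight=[1, 1, 0, 0])
print(total / count)  # 0.5
```

Because the result is just total divided by count, calling it repeatedly without new updates returns the same value, which is what the docs mean by "idempotent".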
CategoricalAccuracy class
tf.keras.metrics.CategoricalAccuracy(name="categorical_accuracy", dtype=None)
Calculates how often predictions match one-hot labels.

You can provide logits of classes as y_pred, since the argmax of logits and probabilities is the same.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as categorical accuracy: an idempotent operation that simply divides total by count.

y_pred and y_true should be passed in as vectors of probabilities, rather than as labels. If necessary, use tf.one_hot to expand y_true as a vector.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
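What the metric computes per batch can be sketched in pure Python. This is an illustrative equivalent (assuming unweighted, equal-length rows), not the Keras code itself:

```python
def categorical_accuracy_sketch(y_true, y_pred):
    # argmax of each one-hot row vs. argmax of each probability/logit row
    matches = [
        max(range(len(t)), key=t.__getitem__) ==
        max(range(len(p)), key=p.__getitem__)
        for t, p in zip(y_true, y_pred)
    ]
    return sum(matches) / len(matches)

y_true = [[0, 0, 1], [0, 1, 0]]                 # one-hot labels
y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]]   # probabilities or logits
print(categorical_accuracy_sketch(y_true, y_pred))  # 0.5
```

The first row is a miss (label argmax is 2, prediction argmax is 1), the second a hit, hence 0.5.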
BinaryAccuracy class
tf.keras.metrics.BinaryAccuracy(name="binary_accuracy", dtype=None, threshold=0.5)
Calculates how often predictions match binary labels.
This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
- threshold: (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0.
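The effect of the threshold argument can be illustrated with a small sketch (plain Python, not the TensorFlow implementation):

```python
def binary_accuracy_sketch(y_true, y_pred, threshold=0.5):
    # predictions above the threshold are treated as 1, otherwise 0
    matches = [t == (1 if p > threshold else 0)
               for t, p in zip(y_true, y_pred)]
    return sum(matches) / len(matches)

print(binary_accuracy_sketch([1, 1, 0, 0], [0.98, 1.0, 0.0, 0.6]))  # 0.75
print(binary_accuracy_sketch([1, 1, 0, 0], [0.98, 1.0, 0.0, 0.6],
                             threshold=0.7))  # 1.0
```

Raising the threshold to 0.7 turns the 0.6 prediction into a 0, which here matches its label.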
Key takeaway from the docs: for compile's metrics argument, passing "accuracy" is enough; Keras picks the appropriate variant based on the loss and label format. CategoricalAccuracy requires one-hot labels, while sparse_categorical_accuracy takes integer labels directly.
```python
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```
Source

```python
from tensorflow.keras import backend as K

def categorical_accuracy(y_true, y_pred):
    # y_true is one-hot and y_pred is probabilities/logits: compare argmax
    return K.cast(K.equal(K.argmax(y_true, axis=-1),
                          K.argmax(y_pred, axis=-1)),
                  K.floatx())

def sparse_categorical_accuracy(y_true, y_pred):
    # y_true holds integer labels; max over the last axis squeezes a
    # trailing dimension of 1 so it can be compared with argmax(y_pred)
    return K.cast(K.equal(K.max(y_true, axis=-1),
                          K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
                  K.floatx())
```
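A NumPy mirror of those two functions makes the input-shape difference concrete. This is an illustrative equivalent, assuming y_true for the sparse variant has shape (batch, 1):

```python
import numpy as np

def np_categorical_accuracy(y_true, y_pred):
    # one-hot y_true: compare argmax of labels and predictions
    return (np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1)).astype(float)

def np_sparse_categorical_accuracy(y_true, y_pred):
    # integer y_true of shape (batch, 1): max over the last axis
    # squeezes it to (batch,), mirroring K.max in the Keras source
    return (np.max(y_true, axis=-1) == np.argmax(y_pred, axis=-1)).astype(float)

y_pred = np.array([[0.1, 0.9], [0.8, 0.2]])
print(np_categorical_accuracy(np.array([[0, 1], [0, 1]]), y_pred))    # [1. 0.]
print(np_sparse_categorical_accuracy(np.array([[1], [1]]), y_pred))   # [1. 0.]
```

Both produce identical per-sample results; only the label encoding differs, which is why compile can map the plain "accuracy" string to either one.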