tf.metrics.accuracy() vs tf.reduce_mean(tf.cast(tf.equal(tf.argmax(z, 1), tf.argmax(y, 1)), tf.float32))

Today I found that these two ways of computing accuracy did not agree, which left me thoroughly confused...
Thanks to the author of this post: "tf.metrics.accuracy计算的是正确率吗" (Does tf.metrics.accuracy compute accuracy?)

It turns out that 【tf.reduce_mean(tf.cast(tf.equal(tf.argmax(z, 1), tf.argmax(y, 1)), tf.float32))】 computes the accuracy of the current batch only:

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(z, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

【tf.metrics.accuracy()】, on the other hand, computes a running accuracy over all the data fed via feed_dict during the lifetime of the session.

# tf.metrics.accuracy returns (accuracy, update_op); evaluating the
# update_op at index [1] updates the internal total/count variables
# and returns the new running accuracy
accuracy = tf.metrics.accuracy(labels=tf.argmax(y, axis=1), predictions=tf.argmax(z, axis=1))[1]
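The difference between the two can be sketched in plain NumPy. The batch data below is hypothetical, chosen only to illustrate how the per-batch figure and the running (streaming) figure diverge after the first batch:

```python
import numpy as np

# Hypothetical per-batch (labels, predictions) as class indices.
batches = [
    (np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])),  # 3/4 correct
    (np.array([1, 1, 0, 0]), np.array([0, 0, 0, 0])),  # 2/4 correct
]

# Per-batch accuracy: what tf.reduce_mean(tf.cast(tf.equal(...), tf.float32)) gives.
batch_acc = [float(np.mean(labels == preds)) for labels, preds in batches]

# Streaming accuracy: what tf.metrics.accuracy accumulates across every
# feed_dict via its internal total/count local variables.
total, count = 0, 0
streaming = []
for labels, preds in batches:
    total += int(np.sum(labels == preds))
    count += labels.size
    streaming.append(total / count)

print(batch_acc)   # [0.75, 0.5]
print(streaming)   # [0.75, 0.625]
```

After the second batch, the per-batch value drops to 0.5, while the streaming value is 5 correct out of 8 seen so far, i.e. 0.625; this is exactly the mismatch described above.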
When replacing categorical_crossentropy with the MMD loss used in transfer learning, the MMD loss function has to be defined first:

```python
import tensorflow as tf

def compute_kernel(x, y):
    """Gaussian kernel evaluated between every pair of rows in x and y."""
    x_size = tf.shape(x)[0]
    y_size = tf.shape(y)[0]
    dim = tf.shape(x)[1]
    tiled_x = tf.tile(tf.reshape(x, tf.stack([x_size, 1, dim])),
                      tf.stack([1, y_size, 1]))
    tiled_y = tf.tile(tf.reshape(y, tf.stack([1, y_size, dim])),
                      tf.stack([x_size, 1, 1]))
    return tf.exp(-tf.reduce_mean(tf.square(tiled_x - tiled_y), axis=[2])
                  / tf.cast(dim, tf.float32))

def maximum_mean_discrepancy(x, y):
    x_kernel = compute_kernel(x, x)
    y_kernel = compute_kernel(y, y)
    xy_kernel = compute_kernel(x, y)
    return (tf.reduce_mean(x_kernel) + tf.reduce_mean(y_kernel)
            - 2 * tf.reduce_mean(xy_kernel))

def mmd_loss(source_samples, target_samples, weight=1.0):
    """Calculate the Maximum Mean Discrepancy (MMD) loss for domain adaptation.

    The MMD measures the distance between the empirical distribution of the
    source samples and the empirical distribution of the target samples.

    Parameters:
        source_samples (tensor): shape (batch_size, num_features), source samples.
        target_samples (tensor): shape (batch_size, num_features), target samples.
        weight (float): scalar weighting factor for the MMD loss.

    Returns:
        The MMD loss for the given source and target samples.
    """
    mmd = maximum_mean_discrepancy(source_samples, target_samples)
    return weight * mmd
```

Then pass it as the loss when compiling the model. Note that Keras invokes the loss as loss(y_true, y_pred) with exactly two arguments, so weight needs a default value (as above), or mmd_loss must be wrapped in a lambda that fixes the weight:

```python
from keras.optimizers import Adam

# Compile the model with the MMD loss in place of categorical_crossentropy
model.compile(optimizer=Adam(lr=0.0001), loss=mmd_loss, metrics=['accuracy'])
```

In this example the MMD loss serves as the model's loss function, and the Adam optimizer is used for training.