deep_learning_Function_tf.argmax() Explained

tf.argmax(input, axis) returns the index of the maximum value along each row or each column, depending on the value of axis. The function itself is easy to understand; what tends to confuse people is the axis argument. For example, what is the difference between tf.argmax(array, 1) and tf.argmax(array, 0)? The 1 and 0 here are the axis. Roughly speaking, the reason axis exists is to make computation over multiple dimensions convenient.

  For example:

import numpy as np

test = np.array([[1, 2, 3], [2, 3, 4], [5, 4, 3], [8, 7, 2]])
np.argmax(test, 0)   # output: array([3, 3, 1])
np.argmax(test, 1)   # output: array([2, 2, 0, 0])

axis = 0:
  With axis=0, elements are compared within each column: the index of the largest element in each column is recorded, and the result is the array of those per-column indices.
   

test[0] = array([1, 2, 3])
test[1] = array([2, 3, 4])
test[2] = array([5, 4, 3])
test[3] = array([8, 7, 2])
# output   :    [3, 3, 1]  

axis = 1:
  With axis=1, elements are compared within each row: the index of the largest element in each row is recorded, and the result is the array of those per-row indices.

test[0] = array([1, 2, 3])  #2
test[1] = array([2, 3, 4])  #2
test[2] = array([5, 4, 3])  #0
test[3] = array([8, 7, 2])  #0
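
Since this article is about tf.argmax(), the same comparison can be run directly in TensorFlow. The following is a minimal sketch (assuming TensorFlow 2.x eager execution); tf.argmax returns the same indices as np.argmax above, just as an int64 tensor.

import numpy as np
import tensorflow as tf

test = np.array([[1, 2, 3], [2, 3, 4], [5, 4, 3], [8, 7, 2]])
print(tf.argmax(test, 0).numpy())   # [3 3 1]   index of each column's maximum
print(tf.argmax(test, 1).numpy())   # [2 2 0 0] index of each row's maximum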


  The examples above all use sub-arrays of equal length. If the lengths are not equal, the largest valid axis is the smallest array's length minus 1, and exceeding it raises an error.
  When the lengths are not equal, the axis=0 comparison effectively becomes a comparison of each array's sum.
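
As mentioned at the beginning, the main reason axis exists is to make computation over multiple dimensions convenient. A typical use of tf.argmax is in classification, where argmax over axis=1 turns a batch of per-class scores into predicted class indices. The sketch below uses made-up logits and labels purely for illustration:

import numpy as np
import tensorflow as tf

logits = np.array([[0.1, 0.7, 0.2],      # one row of class scores per sample
                   [0.8, 0.1, 0.1],
                   [0.3, 0.3, 0.4]])
labels = np.array([1, 0, 2], dtype=np.int64)   # true class index of each sample

preds = tf.argmax(logits, 1)                   # axis=1: best class per sample -> [1 0 2]
correct = tf.equal(preds, labels)              # element-wise comparison with the true labels
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print(accuracy.numpy())                        # 1.0 for these made-up values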

