Usage of np.newaxis in Python

As the name suggests, np.newaxis inserts a new axis (dimension) into the selected data when used inside an index expression. For example, create a 4×4 array:

>>> import numpy as np
>>> array = np.random.rand(4, 4)

The output is:

array([[0.45284467, 0.27883581, 0.72870975, 0.03455946],
       [0.74005136, 0.52413785, 0.78433733, 0.80114353],
       [0.16559874, 0.56112999, 0.18464461, 0.38968731],
       [0.05684794, 0.50929997, 0.45789637, 0.63199181]])

Using np.newaxis:

>>> array_add_axis = array[:, np.newaxis]

The output, now with shape (4, 1, 4), is:

array([[[0.45284467, 0.27883581, 0.72870975, 0.03455946]],

       [[0.74005136, 0.52413785, 0.78433733, 0.80114353]],

       [[0.16559874, 0.56112999, 0.18464461, 0.38968731]],

       [[0.05684794, 0.50929997, 0.45789637, 0.63199181]]])
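Where np.newaxis appears in the index determines where the new axis is inserted. A short sketch (the shape checks below are illustrative additions, not from the original post):

```python
import numpy as np

array = np.random.rand(4, 4)            # shape (4, 4)

# Inserting np.newaxis at different positions yields different shapes:
print(array[np.newaxis].shape)          # (1, 4, 4): new leading axis
print(array[:, np.newaxis].shape)       # (4, 1, 4): new middle axis
print(array[:, :, np.newaxis].shape)    # (4, 4, 1): new trailing axis

# np.newaxis is simply an alias for None, so array[:, None] is equivalent:
print(array[:, None].shape)             # (4, 1, 4)

# np.expand_dims does the same job with an explicit axis argument:
print(np.expand_dims(array, axis=1).shape)  # (4, 1, 4)
```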

The example above adds a dimension to all of the data (the default). You can also slice out part of the data and add the dimension at the same time:

>>> array_add_axis=array[0:2,np.newaxis]
>>> array_add_axis
array([[[0.45284467, 0.27883581, 0.72870975, 0.03455946]],

       [[0.74005136, 0.52413785, 0.78433733, 0.80114353]]])
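A common reason to add an axis this way is NumPy broadcasting. As a brief illustration (an addition to the original post), turning a vector into a column with np.newaxis lets you compute all pairwise differences in one expression:

```python
import numpy as np

x = np.arange(4)                            # shape (4,)

# Column (4, 1) minus row (1, 4) broadcasts to a (4, 4) matrix
# whose entry [i, j] equals x[i] - x[j].
diff = x[:, np.newaxis] - x[np.newaxis, :]

print(diff.shape)   # (4, 4)
print(diff[3, 1])   # x[3] - x[1] = 2
```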

