Reinforcement Learning (DQN) Tutorial

This tutorial shows how to use PyTorch to train a Deep Q-Learning (DQN) agent on the CartPole-v0 task from OpenAI Gym.

The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find the official leaderboard with various algorithms and visualizations on the Gym website.

 

 

[Figure: cartpole]

As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state and also returns a reward that indicates the consequences of the action. In this task, the reward is +1 for every incremental timestep, and the environment terminates if the pole falls over too far or the cart moves more than 2.4 units away from the center. This means that better performing scenarios will run for a longer duration, accumulating a larger return.
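
To make the reward and termination behaviour concrete, here is a minimal sketch of one episode driven by random actions, using the classic gym step API that the rest of this tutorial also relies on (the variable names are illustrative):

import gym

env = gym.make('CartPole-v0')
obs = env.reset()            # 4 real values: cart position, cart velocity, pole angle, pole angular velocity
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # 0 = push cart to the left, 1 = push it to the right
    obs, reward, done, info = env.step(action)  # reward is +1 for every timestep survived
    total_reward += reward                      # done becomes True when the pole falls too far
                                                # or the cart drifts more than 2.4 units from center
print('episode return:', total_reward)
env.close()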

The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, and so on). However, neural networks can solve the task purely by looking at the scene, so we'll use a patch of the screen centered on the cart as the input. Because of this, our results aren't directly comparable to the ones on the official leaderboard - our task is much harder. Unfortunately, this does slow down the training, because we have to render all the frames.

Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This allows the agent to take the pole's velocity into account from a single image.
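
Concretely, the state fed to the network is just the pixel-wise difference of two consecutive screen patches, exactly as the training loop at the end of this tutorial does (get_screen() is the helper defined later in the Input extraction section):

env.reset()
last_screen = get_screen()             # patch rendered before the action
current_screen = get_screen()          # patch rendered after the action
state = current_screen - last_screen   # the difference image encodes the pole's motion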

Packages

First, let's import the needed packages. Firstly, we need gym for the environment (install it with pip install gym). We'll also use the following from PyTorch:

  • neural networks (torch.nn)
  • optimization (torch.optim)
  • automatic differentiation (torch.autograd)
  • utilities for vision tasks (torchvision - a separate package)

 

import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T


env = gym.make('CartPole-v0').unwrapped

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

 

Replay Memory

We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.

For this, we're going to need two classes:

  • Transition - a named tuple representing a single transition in our environment. It essentially maps (state, action) pairs to their (next_state, reward) result, with the state being the screen-difference image described later on.
  • ReplayMemory - a cyclic buffer of bounded size that holds the recently observed transitions. It also implements a .sample() method for selecting a random batch of transitions for training.
Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))


class ReplayMemory(object):

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        """Saves a transition."""
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
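
A minimal usage sketch of the buffer (the zero tensors stand in for real screen-difference states and are purely illustrative):

memory = ReplayMemory(capacity=1000)
for _ in range(5):
    dummy_state = torch.zeros(1, 3, 40, 90)   # placeholder for a screen-difference image
    dummy_action = torch.tensor([[0]])        # shape (1, 1), like the output of select_action later
    memory.push(dummy_state, dummy_action, dummy_state, torch.tensor([1.0]))
print(len(memory))        # 5
batch = memory.sample(3)  # list of 3 randomly chosen Transition namedtuples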

Now, let's define our model. But first, let's quickly recap what a DQN is.

 

Q-network

Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing Q(s, left) and Q(s, right) (where s is the input to the network). In effect, the network is trying to predict the expected return of taking each action given the current input.

class DQN(nn.Module):

    def __init__(self, h, w, outputs):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)

        # Number of Linear input connections depends on output of conv2d layers
        # and therefore the input image size, so compute it.
        def conv2d_size_out(size, kernel_size=5, stride=2):
            return (size - (kernel_size - 1) - 1) // stride + 1
        convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
        convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
        linear_input_size = convw * convh * 32
        self.head = nn.Linear(linear_input_size, outputs)

    # Called with either one element to determine next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        return self.head(x.view(x.size(0), -1))
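
As a quick sanity check, assuming a screen patch of roughly 40x90 pixels (the typical size produced by get_screen() below), the network can be instantiated and probed like this:

net = DQN(h=40, w=90, outputs=2).to(device)
dummy_patch = torch.zeros(1, 3, 40, 90, device=device)  # one RGB 40x90 screen-difference image
print(net(dummy_patch).shape)                           # torch.Size([1, 2]) -> Q(s, left), Q(s, right)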

 

Input extraction

The code below contains utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell, it will display an example patch that it extracted.

resize = T.Compose([T.ToPILImage(),
                    T.Resize(40, interpolation=Image.CUBIC),
                    T.ToTensor()])


def get_cart_location(screen_width):
    world_width = env.x_threshold * 2
    scale = screen_width / world_width
    return int(env.state[0] * scale + screen_width / 2.0)  # MIDDLE OF CART


def get_screen():
    # Returned screen requested by gym is 400x600x3, but is sometimes larger
    # such as 800x1200x3. Transpose it into torch order (CHW).
    screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    # Cart is in the lower half, so strip off the top and bottom of the screen
    _, screen_height, screen_width = screen.shape
    screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
    view_width = int(screen_width * 0.6)
    cart_location = get_cart_location(screen_width)
    if cart_location < view_width // 2:
        slice_range = slice(view_width)
    elif cart_location > (screen_width - view_width // 2):
        slice_range = slice(-view_width, None)
    else:
        slice_range = slice(cart_location - view_width // 2,
                            cart_location + view_width // 2)
    # Strip off the edges, so that we have a square image centered on a cart
    screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
    # (this doesn't require a copy)
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    screen = torch.from_numpy(screen)
    # Resize, and add a batch dimension (BCHW)
    return resize(screen).unsqueeze(0).to(device)


env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
           interpolation='none')
plt.title('Example extracted screen')
plt.show()
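
A quick check of what get_screen() produces (the exact numbers depend on your render resolution, so treat the shape below as typical rather than guaranteed):

patch = get_screen()
print(patch.shape)   # e.g. torch.Size([1, 3, 40, 90]) - batch, channels, height, width
print(patch.dtype)   # torch.float32, with pixel values rescaled to [0, 1]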

 

Training

 

Hyperparameters and utilities

This cell instantiates our model and its optimizer, and defines some utilities:

  • select_action - will select an action according to an epsilon-greedy policy. Simply put, we'll sometimes use our model to choose the action, and sometimes we'll sample one uniformly at random. The probability of choosing a random action starts at EPS_START and decays exponentially towards EPS_END; EPS_DECAY controls the rate of the decay.
  • plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode.
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
​
# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape
​
# Get number of actions from gym action space
n_actions = env.action_space.n
​
policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
​
optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)
​
​
steps_done = 0
​
​
def select_action(state):
    global steps_done
    sample = random.random()
    eps_threshold = EPS_END + (EPS_START - EPS_END) * \
        math.exp(-1. * steps_done / EPS_DECAY)
    steps_done += 1
    if sample > eps_threshold:
        with torch.no_grad():
            # t.max(1) will return largest column value of each row.
            # second column on max result is index of where max element was
            # found, so we pick action with the larger expected reward.
            return policy_net(state).max(1)[1].view(1, 1)
    else:
        return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
​
​
episode_durations = []
​
​
def plot_durations():
    plt.figure(2)
    plt.clf()
    durations_t = torch.tensor(episode_durations, dtype=torch.float)
    plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(durations_t.numpy())
    # Take 100 episode averages and plot them too
    if len(durations_t) >= 100:
        means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
        means = torch.cat((torch.zeros(99), means))
        plt.plot(means.numpy())
​
    plt.pause(0.001)  # pause a bit so that plots are updated
    if is_ipython:
        display.clear_output(wait=True)
        display.display(plt.gcf())
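
To get a feel for how fast exploration decays under these defaults, eps_threshold can be evaluated at a few values of steps_done (a rough back-of-the-envelope check):

# eps_threshold = EPS_END + (EPS_START - EPS_END) * exp(-steps_done / EPS_DECAY)
for steps in (0, 200, 500, 1000):
    eps = EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps / EPS_DECAY)
    print(steps, round(eps, 3))
# roughly: 0 -> 0.9, 200 -> 0.363, 500 -> 0.12, 1000 -> 0.056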

 

Training loop

Finally, the code for training our model.

Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes Q(s_t, a_t) and V(s_{t+1}) = max_a Q(s_{t+1}, a), and combines them into our loss. By definition we set V(s) = 0 if s is a terminal state. We also use a target network to compute V(s_{t+1}) for added stability. The target network has its weights kept frozen most of the time, but it is updated with the policy network's weights every so often. This is usually a set number of steps, but we shall use episodes for simplicity.
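
In equations, the quantity being minimized is the temporal-difference error δ, passed through the Huber loss (this is what F.smooth_l1_loss computes in the code below) and averaged over the sampled batch B:

\delta = Q(s, a) - \left( r + \gamma \max_{a'} Q(s', a') \right)

\mathcal{L} = \frac{1}{|B|} \sum_{(s, a, s', r) \in B} \mathcal{L}(\delta),
\qquad
\mathcal{L}(\delta) =
\begin{cases}
  \frac{1}{2}\delta^2 & \text{for } |\delta| \le 1, \\
  |\delta| - \frac{1}{2} & \text{otherwise.}
\end{cases}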

def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts batch-array of Transitions
    # to Transition of batch-arrays.
    batch = Transition(*zip(*transitions))
​
    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would've been the one after which simulation ended)
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                          batch.next_state)), device=device, dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                                if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)
​
    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of actions taken. These are the actions which would've been taken
    # for each batch state according to policy_net
    state_action_values = policy_net(state_batch).gather(1, action_batch)
​
    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1)[0].
    # This is merged based on the mask, such that we'll have either the expected
    # state value or 0 in case the state was final.
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch
​
    # Compute Huber loss
    loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))
​
    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    for param in policy_net.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizer.step()

 

Below, you can find the main training loop. At the beginning we reset the environment and initialize the state tensor. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.

Below, num_episodes is set to a small value. You should download the notebook and run lots more episodes, such as 300+, to see a meaningful improvement in duration.

num_episodes = 50
for i_episode in range(num_episodes):
    # Initialize the environment and state
    env.reset()
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    for t in count():
        # Select and perform an action
        action = select_action(state)
        _, reward, done, _ = env.step(action.item())
        reward = torch.tensor([reward], device=device)


        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            next_state = current_screen - last_screen
        else:
            next_state = None


        # Store the transition in memory
        memory.push(state, action, next_state, reward)


        # Move to the next state
        state = next_state


        # Perform one step of the optimization (on the target network)
        optimize_model()
        if done:
            episode_durations.append(t + 1)
            plot_durations()
            break
    # Update the target network, copying all weights and biases in DQN
    if i_episode % TARGET_UPDATE == 0:
        target_net.load_state_dict(policy_net.state_dict())
​
print('Complete')
env.render()
env.close()
plt.ioff()
plt.show()

Below is a diagram that illustrates the overall resulting data flow.

[Figure: DQN training data flow]

Actions are chosen either randomly or based on a policy, getting the next step sample from the gym environment. We record the results in the replay memory and also run the optimization step on every iteration. Optimization picks a random batch from the replay memory to train the new policy. The "older" target_net is also used in the optimization to compute the expected Q values; it is updated occasionally to keep it current.

Finally, a note on renting a GPU for experiments: we rented ours from 智星云 and had a good experience. For details, see the 智星云 website: http://www.ai-galaxy.cn/, the Taobao store: https://shop36573300.taobao.com/, or the WeChat official account: 智星AI.

 

 
