Getting Started with Deep Reinforcement Learning - Solving LunarLander-v2 with DQN / Policy Gradient

Hyperparameter settings were borrowed from: https://github.com/ranjitation/DQN-for-LunarLander/blob/master/dqn_agent.py

My earlier CartPole attempt, following the Deeplizard tutorial, ended up a mess, so I switched to another small OpenAI Gym game, LunarLander, and tried to implement DQN from scratch on my own.

The official documentation describes the environment as follows:

Landing pad is always at coordinates (0,0). Coordinates are the first two numbers in state vector. Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back. Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points. Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. Four discrete actions available: do nothing, fire left orientation engine, fire main engine, fire right orientation engine.

The goal is to land the lander gently on the landing pad. Each state is an 8-dimensional vector: horizontal coordinate x, vertical coordinate y, horizontal velocity, vertical velocity, angle, angular velocity, and whether each of the two legs is in contact with the ground. There are 4 discrete actions: do nothing, fire the left orientation engine, fire the main engine, and fire the right orientation engine.
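A quick sanity check of the state and action dimensions with the standard Gym API (a small standalone snippet, independent of the training code below):

import gym

env = gym.make('LunarLander-v2')
print(env.observation_space.shape)  # (8,)  -> the 8-dimensional state vector
print(env.action_space.n)           # 4     -> the 4 discrete actions
env.close()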


The overall algorithm flow is roughly as follows:

'''
During training, the outer for loop iterates over episodes.

    For each episode, initialize the environment and the initial state, then iterate over the time steps of that episode.

        At each time step, pick an action a according to the ε-greedy policy, take action a, and obtain one experience tuple (s, a, r, s').
        Then set the current state to s' (forgetting this cost me an extra afternoon of debugging...) and push the experience into the buffer.
        (Every few time steps) sample a batch of experiences from the buffer and use it to optimize the current policy_net,
        then copy the policy_net parameters into target_net (I originally copied them back into policy_net itself and spent an extra day chasing that bug....).
        Add this time step's reward to the episode's total_reward.
        (To speed up training, abort the episode once the time step exceeds 1000, so the lander cannot hover in the air indefinitely.)

    Apply one step of epsilon decay.

During testing, simply set epsilon to 0 (exploit only, no more exploration), run a number of episodes, and average the total_reward; above 200 the task counts as solved.
'''
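The heart of the "optimize policy_net" step is the one-step TD target. A minimal sketch of that computation (variable names follow the full code below; the batch tensors are assumed to already be on the same device):

# Q(s, a) for the actions actually taken, from the online network
policy_values = policy_net(cur_states).gather(1, actions.unsqueeze(-1))
with torch.no_grad():
    # bootstrapped target r + GAMMA * max_a' Q_target(s', a'), zeroed out for terminal transitions
    next_values = target_net(nxt_states).max(dim=1)[0]
    target_values = rewards + GAMMA * next_values * (1 - dones)
loss = F.mse_loss(policy_values, target_values.unsqueeze(1))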


To wrap up, two small open questions remain:

1. The right frequencies for training policy_net, updating target_net, and decaying epsilon seem to differ quite a bit from case to case? (One common alternative to the periodic hard copy is a soft target update; see the sketch after this list.)

2. Random seeds really are black magic......
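On point 1, many DQN implementations sidestep the target-update frequency by using a soft (Polyak) update after every learning step instead of a periodic hard copy, so the target network trails the online network smoothly. A minimal sketch, where TAU is an assumed extra hyperparameter that does not appear in my code below:

TAU = 1e-3  # assumed interpolation factor

def soft_update(policy_net, target_net, tau=TAU):
    # target <- tau * policy + (1 - tau) * target, applied after every optimization step
    for t_param, p_param in zip(target_net.parameters(), policy_net.parameters()):
        t_param.data.copy_(tau * p_param.data + (1.0 - tau) * t_param.data)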


Code:

import random
import gym
import numpy as np
import matplotlib.pyplot as plt
from itertools import count

import torch

if torch.cuda.is_available():
    torch.cuda.current_device()  # touch the CUDA device early; skipped on CPU-only machines
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

BUFFER_SIZE = 100000
BATCH_SIZE = 64
GAMMA = 0.99  # discount factor
LR = 5e-4
UPDATE_PERIOD = 4
EPS_ED = 0.01
EPS_DECAY = 0.99
SLIDE_LEN = 20
MAX_TIME = 1000

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

env = gym.make('LunarLander-v2')
env.seed(0)
random.seed(0)

class Net(nn.Module):
    def __init__(self, h1=128, h2=64):
        super(Net, self).__init__()
        self.seed = torch.manual_seed(0)
        self.fc1 = nn.Linear(8, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.fc3 = nn.Linear(h2, 4)

    def forward(self, t):
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = self.fc3(t)
        return t


class Experience:
    def __init__(self, cur_state, action, reward, nxt_state, done):
        self.cur_state = cur_state
        self.action = action
        self.reward = reward
        self.nxt_state = nxt_state
        self.done = done


class Buffer:
    def __init__(self):
        # random.seed(0)
        self.n = BUFFER_SIZE
        self.memory = [None for _ in range(BUFFER_SIZE)]
        self.pt = 0
        self.flag = 0  # to indicate whether the buffer can provide a batch of data

    def push(self, experience):
        self.memory[self.pt] = experience
        self.pt = (self.pt + 1) % self.n
        self.flag = min(self.flag + 1, self.n)

    def sample(self, sample_size):
        return random.sample(self.memory[:self.flag], sample_size)


class Agent:
    def __init__(self):
        # random.seed(0)
        self.eps = 1.0
        self.buff = Buffer()

        self.policy_net = Net()
        self.target_net = Net()
        self.optim = optim.Adam(self.policy_net.parameters(), lr=LR)
        self.update_networks()

        self.total_rewards = []
        self.avg_rewards = []

    def update_networks(self):
        self.target_net.load_state_dict(self.policy_net.state_dict())

    def update_experiences(self, cur_state, action, reward, nxt_state, done):
        experience = Experience(cur_state, action, reward, nxt_state, done)
        self.buff.push(experience)

    def sample_experiences(self):
        samples = self.buff.sample(BATCH_SIZE)
        # collate the sampled experiences into batched tensors
        cur_states = torch.stack([ele.cur_state for ele in samples], dim=0)
        actions = torch.cat([ele.action for ele in samples], dim=0)
        rewards = torch.cat([ele.reward for ele in samples], dim=0)
        nxt_states = torch.stack([ele.nxt_state for ele in samples], dim=0)
        dones = torch.cat([ele.done for ele in samples], dim=0)
        return cur_states, actions, rewards, nxt_states, dones

    def get_action(self, state):
        rnd = random.random()
        if rnd > self.eps:  # exploit: greedy action w.r.t. the current Q estimates
            with torch.no_grad():
                values = self.policy_net(state)
            act = torch.argmax(values, dim=0).item()
        else:  # explore: uniform random action
            act = random.randint(0, 3)
        return act

    def optimize_policy(self):
        criterion = nn.MSELoss()
        cur_states, actions, rewards, nxt_states, dones = self.sample_experiences()

        cur_states = cur_states.to(device).float()
        actions = actions.to(device).long()
        rewards = rewards.to(device).float()
        nxt_states = nxt_states.to(device).float()
        dones = dones.to(device)
        self.policy_net = self.policy_net.to(device)
        self.target_net = self.target_net.to(device)

        # for i in range(10):
        policy_values = torch.gather(self.policy_net(cur_states), dim=1, index=actions.unsqueeze(-1))
        with torch.no_grad():
            next_values = torch.max(self.target_net(nxt_states), dim=1)[0]
            target_values = rewards + GAMMA * next_values * (1 - dones)

        target_values = target_values.unsqueeze(1)

        self.optim.zero_grad()
        loss = criterion(policy_values, target_values)

        loss.backward()
        # print("Loss:", loss.item())
        self.optim.step()

        self.policy_net = self.policy_net.cpu()
        self.target_net = self.target_net.cpu()
        return loss.item()

    def train(self, episodes):
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state)
            for tim in count():
                action = self.get_action(cur_state)
                # img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                nxt_state = torch.from_numpy(nxt_state)
                action = torch.tensor(action).unsqueeze(-1)
                reward = torch.tensor(reward).unsqueeze(-1)
                done = torch.tensor(1 if done else 0).unsqueeze(-1)

                self.buff.push(Experience(cur_state, action, reward, nxt_state, done))
                cur_state = nxt_state  # !!!

                if self.buff.flag >= BATCH_SIZE and self.buff.pt % UPDATE_PERIOD == 0:
                    self.update_networks()
                    self.optimize_policy()

                total_reward += reward.item()
                if done or tim >= MAX_TIME:
                    self.update_rewards(total_reward)
                    break

            self.plot_rewards()

            if self.eps > EPS_ED:
                self.eps *= EPS_DECAY

        torch.save(self.policy_net.state_dict(), 'policy_net.pkl')

    def update_rewards(self, total_reward):
        self.total_rewards.append(total_reward)
        cur = len(self.total_rewards) - 1
        rewards = 0
        for i in range(cur, max(-1, cur - SLIDE_LEN), -1):
            rewards += self.total_rewards[i]
        avg = rewards / min(SLIDE_LEN, len(self.total_rewards))
        self.avg_rewards.append(avg)

    def plot_rewards(self):
        plt.clf()
        plt.xlabel('Episodes')
        plt.ylabel('Rewards')
        plt.plot(self.total_rewards, color='r', label='Current')

        plt.plot(self.avg_rewards, color='b', label='Average')
        plt.legend()
        plt.pause(0.001)
        print("Episode", len(self.total_rewards))
        print("Current reward", self.total_rewards[-1])
        print("Average reward", self.avg_rewards[-1])
        print("Epsilon", self.eps)
        plt.savefig('Train.jpg')

    def test(self, episodes):
        self.eps = 0
        ret = 0
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state)
            for tim in count():
                action = self.get_action(cur_state)
                img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                cur_state = torch.from_numpy(nxt_state)
                total_reward += reward
                if done or tim >= MAX_TIME:
                    break
            print("Episode", episode+1)
            print("Current reward", total_reward)
            ret += total_reward
        print("Average reward of", episodes, "episodes:", ret / episodes)

agent = Agent()
agent.train(700)
agent.test(100)


env.close()
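(To evaluate a saved model later without retraining, the weights written to policy_net.pkl can be loaded back into a fresh agent. A minimal sketch, assuming the classes above are already defined in the session:)

env = gym.make('LunarLander-v2')
agent = Agent()
agent.policy_net.load_state_dict(torch.load('policy_net.pkl'))
agent.test(100)
env.close()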

The reward curve during training (saved by plot_rewards as Train.jpg):

Test results:

......

Episode 96
Current reward 210.40330984897227
Episode 97
Current reward 269.30063656546673
Episode 98
Current reward 297.40313034589826
Episode 99
Current reward 242.37884580171982
Episode 100
Current reward 235.21898442946033
Average reward of 100 episodes: 245.67580368550185


(Second update, 2021.4.11)

I recently hand-rolled a Policy Gradient implementation. Because my understanding of "sampling according to the policy" was off, I ended up debugging on and off for several days......
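The point that tripped me up: during rollouts the action has to be sampled from the policy's output distribution, not taken as the argmax. A minimal sketch of the distinction, where log_probs stands for the log-softmax output of the policy network for a single state:

dist = torch.distributions.Categorical(logits=log_probs)  # equivalent to probs=log_probs.exp() here
sampled_action = dist.sample().item()      # sample according to the policy: what rollouts / training need
greedy_action = log_probs.argmax().item()  # the "best so far" action: fine for evaluation, wrong for on-policy rollouts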

# -*- coding: utf-8 -*-
"""LunarLander - PG.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/16U2WE7925uWv8FMKwyP_aY6QYsbAxFJ5
"""
import random
import gym
import numpy as np
import matplotlib.pyplot as plt
from itertools import count

import torch

import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

env = gym.make('LunarLander-v2')
env.seed(0)
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)

MAX_TIME = 1000
LR = 3e-4
EPOCHS = 4000
SLIDE_LEN = 20
NOISE_RATE = 0.1
GAMMA = 0.99

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


class Net(nn.Module):
    def __init__(self, h1=128, h2=128):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(8, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.fc3 = nn.Linear(h2, 4)

    def forward(self, t):
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = F.log_softmax(self.fc3(t), dim=0)  # dim=0 because the network is always fed a single, unbatched 8-dim state here
        return t


class Trajectory:
    def __init__(self):
        self.reward = 0
        self.rewards = []

    def __len__(self):
        return len(self.rewards)

    def push(self, cur_reward):
        self.reward += cur_reward
        self.rewards.append(cur_reward)


def get_suffix_sum(a, gamma=GAMMA):
    # discounted returns-to-go: out[t] = a[t] + gamma * a[t+1] + gamma^2 * a[t+2] + ...
    # e.g. get_suffix_sum([1.0, 1.0, 1.0], 0.99) -> [2.9701, 1.99, 1.0]
    tmp = a[::-1]
    for i in range(1, len(tmp)):
        tmp[i] += tmp[i - 1] * gamma
    return tmp[::-1]


class Agent:
    def __init__(self):
        self.policy = Net()

        self.losses = []
        self.opt = optim.Adam(self.policy.parameters(), lr=LR)

        self.total_rewards = [0]
        self.avg_rewards = [0]
        self.action_space = [i for i in range(4)]

    def get_action(self, cur_state, mode='train'):
        output = self.policy(cur_state)
        # action = output.argmax()
        probs = torch.exp(output).detach().cpu().numpy()
        action = np.random.choice(self.action_space, p=probs)
        action = torch.tensor(action).long().to(device)
        # sample the action instead of taking the "optimal" so far
        output, action = output.unsqueeze(0), action.unsqueeze(0)
        criterion = nn.NLLLoss()
        loss = criterion(output, action)
        self.losses.append(loss)
        return action.item()

    def train_one_episode(self, device=device):
        self.policy.to(device)

        total_loss = 0
        self.losses.clear()

        cur_trajectory = Trajectory()
        cur_state = env.reset()
        cur_state = torch.from_numpy(cur_state).to(device)

        for tim in count():
            action = self.get_action(cur_state)
            # print(tim, action)
            # img = env.render(mode='rgb_array')
            nxt_state, reward, done, _ = env.step(action)
            nxt_state = torch.from_numpy(nxt_state).to(device)
            action = torch.tensor(action).unsqueeze(-1).to(device)
            reward = torch.tensor(reward).unsqueeze(-1).to(device)
            done = torch.tensor(1 if done else 0).unsqueeze(-1).to(device)

            cur_trajectory.push(reward.item())
            cur_state = nxt_state  # !!!

            if done or tim >= MAX_TIME:
                self.update_rewards(cur_trajectory.reward)
                break

        reward_weight = get_suffix_sum(cur_trajectory.rewards)
        reward_weight = torch.from_numpy(np.array(reward_weight)).to(device)

        # plot_tensor(np.array(cur_trajectory.rewards), 'rewards')
        # plot_tensor(reward_weight.cpu().numpy(), 'discounted_suffix_reward_weight')

        assert len(self.losses) == len(cur_trajectory.rewards)

        mean = reward_weight.mean()
        std = reward_weight.std()
        reward_weight = (reward_weight - mean) / std

        for i in range(len(self.losses)):
            total_loss += self.losses[i] * reward_weight[i]

        self.plot_rewards()

        self.opt.zero_grad()
        total_loss.backward()
        # for name, para in self.policy.named_parameters():
        #     print(name, para.grad.mean())
        self.opt.step()

        self.policy.cpu()
        torch.save(self.policy.state_dict(), 'policy.pkl')

    def update_rewards(self, total_reward):
        self.total_rewards.append(total_reward)
        cur = len(self.total_rewards) - 1
        rewards = 0
        for i in range(cur, max(-1, cur - SLIDE_LEN), -1):
            rewards += self.total_rewards[i]
        avg = rewards / min(SLIDE_LEN, len(self.total_rewards))
        self.avg_rewards.append(avg)

    def plot_rewards(self):
        plt.clf()
        plt.xlabel('Episodes')
        plt.ylabel('Rewards')
        plt.plot(self.total_rewards, color='g', label='Current')

        plt.plot(self.avg_rewards, color='b', label='Average')
        plt.legend()
        plt.pause(0.001)
        print("Episode", len(self.total_rewards))
        print("Current reward", self.total_rewards[-1])
        print("Average reward", self.avg_rewards[-1])
        plt.savefig('Train.jpg')

    def train(self, epochs=EPOCHS, device=device):
        for epoch in range(epochs):
            self.train_one_episode(device)

    def test(self, episodes, device=device):
        self.policy.load_state_dict(torch.load("policy.pkl"))
        self.policy.to(device)
        ret = 0
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state).to(device)
            for tim in count():
                action = self.get_action(cur_state)
                img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                cur_state = torch.from_numpy(nxt_state).to(device)
                total_reward += reward
                if done or tim >= MAX_TIME:
                    break
            print("Episode", episode + 1)
            print("Current reward", total_reward)
            ret += total_reward
        print("Average reward of", episodes, "episodes:", ret / episodes)


agent = Agent()
agent.train()
agent.test(20)

The final results over 20 test episodes:

Episode 1
Current reward 84.33441682399996
Episode 2
Current reward 216.4304736195864
Episode 3
Current reward 120.18777822341227
Episode 4
Current reward 79.38344301251452
Episode 5
Current reward 149.77976608818616
Episode 6
Current reward 139.1168128341676
Episode 7
Current reward 254.51534398848398
Episode 8
Current reward 133.54801428683155
Episode 9
Current reward 186.94804010946848
Episode 10
Current reward 202.23091982552887
Episode 11
Current reward 108.69519273351803
Episode 12
Current reward 234.48558150706708
Episode 13
Current reward 233.7075148866764
Episode 14
Current reward 234.104787749662
Episode 15
Current reward 201.50699844779575
Episode 16
Current reward 235.9508429194292
Episode 17
Current reward 111.90205065489462
Episode 18
Current reward 123.62077742772023
Episode 19
Current reward 228.49487126388732
Episode 20
Current reward 130.76856178782705
Average reward of 20 episodes: 170.4856094095329

Process finished with exit code 0

Even in the episodes with reward below 200 the lander mostly touches down smoothly; I just don't understand why it doesn't shut the engines off after landing...... is it still making left-right adjustments? If anyone has a solution, please leave a pointer in the comments~
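(One possible factor, just a guess: even in test(), get_action still samples an action from the policy distribution at every step rather than acting greedily. A purely greedy rollout of the saved policy would look roughly like this; untested sketch, reusing the classes and the global env from the script above:)

agent = Agent()
agent.policy.load_state_dict(torch.load('policy.pkl'))
state = torch.from_numpy(env.reset())
total_reward = 0.0
for tim in count():
    with torch.no_grad():
        action = agent.policy(state).argmax().item()  # greedy action instead of sampling
    nxt_state, reward, done, _ = env.step(action)
    state = torch.from_numpy(nxt_state)
    total_reward += reward
    if done or tim >= MAX_TIME:
        break
print("Greedy-eval reward:", total_reward)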
