Reinforcement Learning for Beginners: Playing Super Mario (Part 1)

Installing the Mario Game Environment


Before building the reinforcement learning model, we first need to install the Mario game environment. Both packages can be installed directly with pip:
Mario game environment: pip install gym-super-mario-bros
gym action-control module: pip install gym-contra
Search pypi.org for usage details. Combined with the action wrapper, the game environment can be driven by an agent just like any other gym environment: gym_super_mario_bros provides the game itself (8 worlds, 32 stages in total), and JoypadSpace selects the action set. When creating the gym_super_mario_bros environment you can also choose the stage and the render mode.
There are 4 render modes: standard, downsample, pixel and rectangle, shown below:
[Screenshots: the four render modes (standard, downsample, pixel, rectangle)]
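
As a quick reference, the render mode is selected through the version suffix of the environment id, and single stages follow the SuperMarioBros-<world>-<stage>-v<version> pattern (a minimal sketch based on the gym-super-mario-bros documentation):

import gym_super_mario_bros

# The version suffix picks the render mode:
#   v0 = standard, v1 = downsample, v2 = pixel, v3 = rectangle
env_standard = gym_super_mario_bros.make("SuperMarioBros-v0")       # the full game, standard graphics
env_stage_11 = gym_super_mario_bros.make("SuperMarioBros-1-1-v3")   # World 1-1 only, rectangle graphics
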
The action set is imported with: from gym_super_mario_bros.actions import SIMPLE_MOVEMENT

The actions module ships with 3 pre-configured button sets: RIGHT_ONLY, SIMPLE_MOVEMENT and COMPLEX_MOVEMENT. RIGHT_ONLY contains only right and right+jump (a single behaviour, the simplest to train), SIMPLE_MOVEMENT adds jump and the left direction (medium training complexity), and COMPLEX_MOVEMENT includes the full left/right button combinations (no need to raise the training difficulty for now). Of course, you can also define exactly which actions to include yourself, as sketched below.
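
A custom action set can be passed straight to JoypadSpace (a minimal sketch; the button combinations below are only an illustration, not the set used in this post):

from nes_py.wrappers import JoypadSpace
import gym_super_mario_bros

# A hypothetical custom action set: do nothing, run right, jump right
CUSTOM_MOVEMENT = [
    ['NOOP'],
    ['right', 'B'],
    ['right', 'A'],
]

env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, CUSTOM_MOVEMENT)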

Importing the Basic Game Environment

I chose the standard mode; the environment is created with:

env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)

Preprocessing the Game Environment

Next, we apply a few simple preprocessing steps to the Mario environment.
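
The code in the rest of this post assumes roughly the following imports (collected here for convenience; adjust them to your own project layout):

import copy
import cv2
import gym
import numpy as np
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import parl
import matplotlib.pyplot as plt
from gym.spaces import Box
from gym.wrappers import FrameStack
from nes_py.wrappers import JoypadSpace
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from parl.utils import logger
from parl.utils.utils import check_model_method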

Downsampling and Grayscaling the Frames

We subclass gym.ObservationWrapper so that every returned observation is the downsampled, grayscaled frame. Doing this first means the frame skipping and frame stacking steps that follow are not affected.

class ResizeObservation(gym.ObservationWrapper):
    """ResizeObservation downsamples and grayscales every observation."""
    def __init__(self, env, shape):
        super().__init__(env)
        if isinstance(shape, int):
            self.shape = (shape, shape)  # target (width, height) of the downsampled frame
        else:
            self.shape = tuple(shape)
        # Grayscaling collapses the 3 colour channels, so the new observation is a single 2-D frame.
        obs_shape = (self.shape[1], self.shape[0])  # Box shape is (height, width)
        self.observation_space = Box(low=0, high=255, shape=obs_shape, dtype=np.uint8)

    def observation(self, observation):
        """The raw observation is an RGB frame of shape [240, 256, 3]."""
        image = cv2.cvtColor(observation, cv2.COLOR_RGB2GRAY)  # grayscale the RGB frame with cv2
        image = cv2.resize(image, self.shape)  # downsample, e.g. 256x240 down to 128x120
        image = paddle.to_tensor(image, dtype='float32')  # convert to a paddle tensor for the model
        return image
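
A quick way to sanity-check the wrapper (a hypothetical usage snippet; the expected shape assumes the (128, 120) target used later in main()):

env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)
env = ResizeObservation(env, (128, 120))
obs = env.reset()
print(obs.shape)  # a single grayscale frame, expected: [120, 128]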

Implementing Frame Skipping

In Mario, skipping a few frames loses very little information, and the rewards of the skipped frames are accumulated onto the last one; skipping 1-2 intermediate frames is usually enough. This also increases sample diversity, so a replay buffer of the same capacity holds a richer set of experiences.

class SkipFrame(gym.Wrapper):
    """SkipFrame implements frame skipping. Consecutive frames change very little, so we can skip n
    intermediate frames without losing much information; the n-th frame aggregates the reward
    accumulated over the skipped frames."""
    def __init__(self, env, skip):
        """Return only every `skip`-th frame"""
        super().__init__(env)
        self._skip = skip

    def step(self, action):
        """Repeat the action and sum the rewards"""
        total_reward = 0.0
        done = False
        for _ in range(self._skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info

Stacking Frames into One Observation

gym.wrappers provides an official FrameStack class that packs several consecutive frames into a single observation, so even with the frame skipping above the model still sees the most recent frames. During learning, the model can then use the previous frames to judge the current dynamic state (taking off / landing).

# Use the official wrapper directly
from gym.wrappers import FrameStack
env = FrameStack(env, num_stack=3)  # stack 3 consecutive frames into one observation

The Model Network

After the wrappers above, the model input is a stack of 3 grayscale frames (a 2-D image with 3 channels), so the network is built from 3 convolutional layers plus 2 fully connected layers.
The convolutional stack also contains batch normalization layers and 2 pooling layers.
Batch normalization: helps lower the model's error rate and guards against vanishing and exploding gradients.
Pooling: reduces the spatial dimensions; it can be dropped if you have enough compute and GPU memory.

class DModel(parl.Model):
    def __init__(self, act, num_channels=64):
        """
        训练网络
        :param act: 输出的行动列表
        :param num_channels: 卷积层维度
        """
        super(DModel, self).__init__()
        self.conv1 = nn.Conv2D(in_channels=3, out_channels=num_channels, kernel_size=5, stride=1)
        self.avgP1 = nn.AvgPool2D(2)
        self.conv1_bn = nn.BatchNorm2D(num_features=num_channels)
        self.conv2 = nn.Conv2D(in_channels=num_channels, out_channels=num_channels, kernel_size=5, stride=1)
        self.avgP2 = nn.AvgPool2D(2)
        self.conv2_bn = nn.BatchNorm2D(num_features=num_channels)
        self.conv3 = nn.Conv2D(in_channels=num_channels, out_channels=num_channels, kernel_size=5, stride=1)
        self.conv3_bn = nn.BatchNorm2D(num_features=num_channels)
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(36800, 512)
        self.fc2 = nn.Linear(512, act)

    def forward(self, x):
        x = F.selu(self.conv1(x))
        x = self.avgP1(x)
        x = self.conv1_bn(x)
        x = self.conv2(x)
        x = self.avgP2(x)
        x = self.conv2_bn(x)
        x = self.conv3(x)
        x = self.conv3_bn(x)
        x = self.flatten(x)
        x = F.selu(self.fc1(x))
        x = self.fc2(x)
        return x
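
For reference, with 3 stacked 120x128 grayscale frames the feature map shrinks as 120x128 → conv(5x5) → 116x124 → pool(2) → 58x62 → conv(5x5) → 54x58 → pool(2) → 27x29 → conv(5x5) → 23x25, so the flattened size is 64 × 23 × 25 = 36800, which is where fc1's input dimension comes from; it must be recomputed if the input resolution changes.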

Writing the Agent

The agent is the same as in the earlier DQN example and can be reused.

class Agent(parl.Agent):
    def __init__(self, algorithm, act_dim, e_greed=0.1, e_greed_decrement=0):
        super(Agent, self).__init__(algorithm)
        self.act_dim = act_dim
        self.global_step = 0
        self.update_target_steps = 200  # copy the model parameters to target_model every 200 training steps
        self.e_greed = e_greed  # probability of picking a random action (exploration)
        self.e_greed_decrement = e_greed_decrement  # gradually reduce exploration as training converges

    def sample(self, obs):
        """
        Sample an action for observation obs (with exploration).
        """
        sample = np.random.random()  # random float in [0, 1)
        if sample < self.e_greed:
            act = np.random.randint(self.act_dim)  # explore: every action has some chance of being chosen
        else:
            act = self.predict(obs)  # exploit: pick the best action
        self.e_greed = max(
            0.01, self.e_greed - self.e_greed_decrement)  # gradually reduce exploration as training converges
        return act

    def predict(self, obs):
        """
        Pick the best action for observation obs.
        """
        obs = np.expand_dims(obs, axis=0)  # add a batch dimension
        obs = paddle.to_tensor(list(obs), dtype='float32')
        pred_q = self.alg.predict(obs)
        pred_q = np.squeeze(pred_q, axis=0)
        act = pred_q.argmax().numpy()[0]  # index of the largest Q value, i.e. the chosen action
        return act

    def learn(self, obs, act, reward, next_obs, terminal):
        """
        Update the model parameters once with a batch of training data.
        """
        if self.global_step % self.update_target_steps == 0:
            self.alg.sync_target()
        self.global_step += 1
        act = np.expand_dims(act, axis=-1)
        reward = np.expand_dims(reward, axis=-1)
        terminal = np.expand_dims(terminal, axis=-1)
        obs = paddle.to_tensor(obs, dtype='float32')
        act = paddle.to_tensor(act, dtype='int32')
        reward = paddle.to_tensor(reward, dtype='float32')
        next_obs = paddle.to_tensor(next_obs, dtype='float32')
        terminal = paddle.to_tensor(terminal, dtype='float32')
        loss = self.alg.learn(obs, act, reward, next_obs, terminal)  # one training update of the network
        return loss.numpy()[0]

The DDQN Algorithm

Without further ado, here is the code:

class DDQN(parl.Algorithm):
    def __init__(self, model, gamma=None, lr=None):
        """ DDQN algorithm

        Args:
            model (parl.Model): forward neural network representing the Q function.
            gamma (float): discount factor used when computing the accumulated reward
            lr (float): learning rate.
        """
        # checks
        check_model_method(model, 'forward', self.__class__.__name__)
        assert isinstance(gamma, float)
        assert isinstance(lr, float)

        self.model = model
        self.target_model = copy.deepcopy(model)

        self.gamma = gamma
        self.lr = lr

        self.mse_loss = paddle.nn.MSELoss(reduction='mean')
        self.optimizer = paddle.optimizer.Adam(
            learning_rate=lr, parameters=self.model.parameters())

    def predict(self, obs):
        """ use self.model (Q function) to predict the action values
        """
        return self.model(obs)

    def learn(self, obs, action, reward, next_obs, terminal):
        """ update the Q function (self.model) with DDQN algorithm
        """
        # Q
        pred_values = self.model(obs)
        action_dim = pred_values.shape[-1]
        action = paddle.squeeze(action, axis=-1)
        action_onehot = paddle.nn.functional.one_hot(
            action, num_classes=action_dim)
        pred_value = paddle.multiply(pred_values, action_onehot)
        pred_value = paddle.sum(pred_value, axis=1, keepdim=True)

        with paddle.no_grad():
            # select the greedy action based on the online Q network: a' = argmax_a Q(x', a)
            greedy_actions = self.model(next_obs).argmax(1)
            # get the bootstrapped next-state value: Q_target(x', a')
            g_action_oh = paddle.nn.functional.one_hot(
                greedy_actions, num_classes=action_dim)
            max_v = self.target_model(next_obs).multiply(g_action_oh)
            max_v = max_v.sum(axis=1, keepdim=True)
            # get target value: y_i = r_i + gamma * Q_{target}(x`, a`)
            target = reward + (1 - terminal) * self.gamma * max_v

        loss = self.mse_loss(pred_value, target)

        # optimize
        self.optimizer.clear_grad()
        loss.backward()
        self.optimizer.step()
        return loss

    def sync_target(self):
        self.model.sync_weights_to(self.target_model)
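
For reference, the target computed in learn() is y = r + γ · (1 − done) · Q_target(s′, argmax_a Q(s′, a)): the online network selects the greedy next action while the target network evaluates it, which is what distinguishes DDQN from plain DQN and reduces the over-estimation of Q values.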

Training

Below is the training code.

Training episode

def run_train_episode(agent, env, rpm):
    total_reward = 0
    obs = env.reset()
    # obs = preprocess(obs)
    step = 0
    while True:
        step += 1
        action = agent.sample(obs)  # sample an action; every action has some probability of being tried
        next_obs, reward, done, _ = env.step(action)
        rpm.append((obs, action, reward, next_obs, done))
        # train model
        if (len(rpm) > MEMORY_WARMUP_SIZE) and (step % LEARN_FREQ == 0):
            # s,a,r,s',done
            (batch_obs, batch_action, batch_reward, batch_next_obs,
             batch_done) = rpm.sample(BATCH_SIZE)
            train_loss = agent.learn(batch_obs, batch_action, batch_reward,
                                     batch_next_obs, batch_done)
        total_reward += reward
        obs = next_obs
        # Every 100 steps, check whether Mario is still making progress; if he has been stuck, end the episode.
        # Ending the episode here gives no penalty: a penalty was tried earlier, but it made the agent
        # afraid to approach pipes/walls in some scenes, so it was removed.
        if step % 100 == 0 and step * 0.5 > total_reward:
            break
        if done:
            break
    return total_reward

Evaluation episode

# Evaluate the agent: run 5 episodes and average the total reward
def run_evaluate_episodes(agent, env, render=False, runnum=5):
    eval_reward = []
    for i in range(runnum):
        obs = env.reset()
        step = 0
        episode_reward = 0
        while True:
            step += 1
            action = agent.predict(obs)  # predict the action, always choosing the best one
            obs, reward, done, _ = env.step(action)
            episode_reward += reward
            if render:
                env.render()
            if done:
                break
            if step % 100 == 0 and step*0.5 > episode_reward:
                break
        eval_reward.append(episode_reward)
    return np.mean(eval_reward)

Running the Main Program

Hyperparameter Settings

The learning rate is set to 0.0002 to reduce training fluctuation, and it is also adjusted dynamically during training.
GAMMA is deliberately not set to 0.99, so that actions far in the future (e.g. jumping over a pipe, running into an enemy) do not influence decisions made much earlier.

LEARN_FREQ = 16  # learning frequency; no need to learn on every step, accumulate some new experience first for efficiency
MEMORY_SIZE = 50000  # size of the replay memory; larger uses more memory
MEMORY_WARMUP_SIZE = 5000  # pre-fill the replay memory with some experience before sampling batches for the agent to learn from
BATCH_SIZE = 128  # number of samples per learning step, drawn at random from the replay memory
LEARNING_RATE = 0.0002  # learning rate
GAMMA = 0.95  # reward discount factor, typically between 0.9 and 0.999
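
The ReplayMemory class used below is not shown in this post. The following is a minimal deque-based sketch that matches the calls made in the training loop (append, sample, len(), clear_all); treat it as an assumption about the original implementation rather than the exact code:

import random
import collections

class ReplayMemory(object):
    """A minimal experience replay buffer (sketch)."""
    def __init__(self, max_size):
        self.buffer = collections.deque(maxlen=max_size)

    def append(self, exp):
        # exp is a tuple: (obs, action, reward, next_obs, done)
        self.buffer.append(exp)

    def sample(self, batch_size):
        mini_batch = random.sample(self.buffer, batch_size)
        obs, action, reward, next_obs, done = zip(*mini_batch)
        # assumes the stored observations convert cleanly with np.array
        return (np.array(obs).astype('float32'), np.array(action).astype('int32'),
                np.array(reward).astype('float32'), np.array(next_obs).astype('float32'),
                np.array(done).astype('float32'))

    def clear_all(self):
        self.buffer.clear()

    def __len__(self):
        return len(self.buffer)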

Main program

def main():
    env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
    env = JoypadSpace(env, SIMPLE_MOVEMENT)
    env = ResizeObservation(env, (128, 120))
    env = SkipFrame(env, skip=3)
    # FrameStack stacks consecutive environment frames into a single observation for the learning model,
    # so Mario's motion over the last few frames tells us whether he is landing or jumping.
    env = FrameStack(env, num_stack=3)
    act_dim = env.action_space.n
    print(act_dim)

    rpm = ReplayMemory(MEMORY_SIZE)  # experience replay buffer for DDQN
    model = DModel(act_dim)
    algorithm = DDQN(model, gamma=GAMMA, lr=LEARNING_RATE)  # the DDQN algorithm defined above
    agent = Agent(algorithm, act_dim, 0.15, 0.0000001)  # initial exploration probability of 15%

    # Pre-fill the replay memory so that the first training batches have enough sample diversity
    logger.info('start collecting warm-up data')
    while len(rpm) < MEMORY_WARMUP_SIZE:
        run_train_episode(agent, env, rpm)
    logger.info('finished collecting {} warm-up transitions'.format(MEMORY_WARMUP_SIZE))

    max_episode = 500000
    episode = 0
    runnum = 5
    render = True
    r1 = []
    max_reward = 500
    while episode < max_episode:  # train for max_episode episodes; evaluation episodes are not counted
        total_50 = []
        for i in range(5):
            total_reward = run_train_episode(agent, env, rpm)
            episode += 1
            total_50.append(total_reward)
            if algorithm.lr >= 0.0001:
                algorithm.lr -= 0.0000005
                algorithm.optimizer.set_lr(algorithm.lr)  # push the decayed rate into the optimizer; changing .lr alone has no effect
        logger.info('episode:{}, train reward max:{}, min:{}, mean:{}'.format(
            episode, np.max(total_50), np.min(total_50), np.mean(total_50)))

        eval_reward = run_evaluate_episodes(agent, env, render=render, runnum=runnum)
        r1.append(eval_reward)
        logger.info(
            'episode:{},e_greed:{},Test reward:{},lr:{},max_reward:{}'.format(episode, agent.e_greed, eval_reward,
                                                                              algorithm.lr, max_reward))

        if eval_reward > max_reward:
            agent.save('./mlo_dqn_model')
            max_reward = eval_reward
            if algorithm.lr >= 0.00005:  # whenever a better model is found, also lower the learning rate
                algorithm.lr = algorithm.lr * 0.9
                algorithm.optimizer.set_lr(algorithm.lr)
            logger.info('re-collecting warm-up data after saving the improved model')
            rpm.clear_all()
            while len(rpm) < MEMORY_WARMUP_SIZE:
                run_train_episode(agent, env, rpm)
            logger.info('finished collecting {} warm-up transitions'.format(MEMORY_WARMUP_SIZE))

    plt.plot(r1)
    plt.show()

Summary of Phase One

At this point, after roughly 10,000+ training episodes, the model can reach a score of 3000+ on this stage and clear it (see the figure below). There are still problems, though: the agent has not learned how to dodge enemies or how to jump over pits and pipes, so the same model cannot be reused on other stages, which is not quite what I originally set out to do. The plan is to keep improving the model and perhaps add object detection to see whether that helps. Suggestions are very welcome; I am still a beginner and appreciate any advice.

[Screenshot: the trained agent clearing World 1-1 with a score above 3000]
Thanks to PaddlePaddle's course 《世界冠军带你从零实践强化学习》 ("World Champions Teach You Reinforcement Learning from Scratch") for the learning materials, and thanks to teacher Keke for the lessons (Bilibili @科科磕盐).
