DQN for Autonomous Driving: A Python + gym Implementation

I. Installing the Environment

gym is a toolkit for developing and comparing reinforcement learning algorithms. Installing the gym library and its sub-environments in Python is straightforward.

Install gym:

pip install gym

Install the autonomous-driving module; here we use highway-env, a package published on GitHub by Edouard Leurent (original link):

pip install --user git+https://github.com/eleurent/highway-env

It includes six scenarios:

  • Highway: "highway-v0"
  • Merge: "merge-v0"
  • Roundabout: "roundabout-v0"
  • Parking: "parking-v0"
  • Intersection: "intersection-v0"
  • Racetrack: "racetrack-v0"

Detailed documentation can be found here.
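As a quick sanity check, each of the scenario IDs listed above can be instantiated with gym.make. This is only a minimal sketch; it assumes highway-env is installed as described above.

import gym
import highway_env  # importing highway_env registers its scenarios with gym

scenarios = ["highway-v0", "merge-v0", "roundabout-v0",
             "parking-v0", "intersection-v0", "racetrack-v0"]

for scenario in scenarios:
    env = gym.make(scenario)
    env.reset()
    print(scenario, "->", type(env.unwrapped).__name__)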

II. Configuring the Environment

Once installed, you can experiment with it in code (using the highway scenario as an example):

import gym
import highway_env
%matplotlib inline  # Jupyter magic for inline rendering

env = gym.make('highway-v0')
env.reset()
for _ in range(3):
    # take the discrete "IDLE" action (keep the current lane and speed)
    action = env.action_type.actions_indexes["IDLE"]
    obs, reward, done, info = env.step(action)
    env.render()

Running this generates a highway scene in the simulator; the green vehicle is the ego vehicle.
The env class has many configurable parameters; see the original documentation for details.
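As a rough illustration, the configuration can be inspected and overridden before resetting. The keys below (lanes_count, vehicles_count) are taken from the highway-env defaults and may differ between versions; this is a sketch, not a definitive reference.

import gym
import highway_env
import pprint

env = gym.make("highway-v0")
pprint.pprint(env.config)   # inspect the default configuration dictionary

# override a few entries; changes take effect on the next reset
env.configure({
    "lanes_count": 3,
    "vehicles_count": 20,
})
env.reset()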

III. Training the Model

1. Data Processing

(1) State

The highway-env package does not define any sensors; all vehicle states (observations) are read directly from the underlying code, which saves a lot of up-front work. According to the documentation, observations come in three output formats: Kinematics, Grayscale Image, and Occupancy grid.

Kinematics

Outputs a V × F matrix, where V is the number of observed vehicles (including the ego vehicle itself) and F is the number of features recorded.
Example:

| Vehicle     | x     | y   | v_x  | v_y |
|-------------|-------|-----|------|-----|
| ego-vehicle | 5.0   | 4.0 | 15.0 | 0   |
| vehicle1    | -10.0 | 4.0 | 12.0 | 0   |
| vehicle2    | 13.0  | 8.0 | 13.5 | 0   |

By default the generated data is normalized, with value ranges [100, 100, 20, 20] (i.e. x and y in [-100, 100], vx and vy in [-20, 20]). You can also choose whether the attributes of vehicles other than the ego vehicle are given as absolute map coordinates or as coordinates relative to the ego vehicle.
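For intuition, here is a minimal sketch that prints the V × F matrix returned by reset(), assuming the default Kinematics observation of highway-v0; the exact shape depends on the configured vehicle and feature counts.

import gym
import highway_env
import numpy as np

env = gym.make("highway-v0")
obs = env.reset()        # V x F matrix of normalized kinematic features
print(obs.shape)         # typically (5, 5) with the default settings
print(np.round(obs, 2))  # row 0 corresponds to the ego vehicle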

When defining the environment, the feature parameters need to be configured:

config = {
    "observation": {
        "type": "Kinematics",
        # observe 5 vehicles (including the ego vehicle)
        "vehicles_count": 5,
        # 7 features in total
        "features": ["presence", "x", "y", "vx", "vy", "cos_h", "sin_h"],
        "features_range": {
            "x": [-100, 100],
            "y": [-100, 100],
            "vx": [-20, 20],
            "vy": [-20, 20]
        },
        "absolute": False,
        "order": "sorted"
    }
}
