[Reinforcement Learning] Building the Agent in a V2X Scenario

This post walks through a reinforcement-learning-based resource allocation scheme, focusing on the core steps of obtaining the environment state, predicting actions, and updating the network, and analyzes the related algorithm flow and implementation details.


0. Preface

These notes are primarily for my own reference; they organize how each element of RL is implemented and which function carries it.

The code comes from https://github.com/haoyye/ResourceAllocationReinforcementLearning

The ultimate goal is to be clear about which capabilities an RL agent needs, how the end-to-end algorithm is assembled, and in which module (function) each capability should be implemented.

The RL capabilities I can think of so far are (a function map tying them together is sketched right after this list):

  • Get the state from the env ok
  • Run the NN to get an action ok
  • Update the NN based on the action and the env ok
  • Build the NN
  • Supplement 1: from the current state s_t and the action, obtain the next state s_(t+1) and the reward ok
  • Supplement 2: store the previous state, current state, reward, and action in the buffer ok
  • Supplement 3: sample from the buffer and update the DQN parameters ok
  • Supplement 4: update the target_q_network ok
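
For orientation, here is a map from each capability to the function that carries it, gathered from the sections below (act_for_training lives in env.py and sample() in replay_memory.py; the remaining methods sit in the agent class):

    # Capability -> carrying function, as covered in this post
    capability_to_function = {
        "get the state from the env":            "get_state(idx)",                           # section 1
        "run the NN to get an action":           "predict(s_t, step, test_ep)",              # section 2
        "build the NN":                          "build_dqn()",                              # section 6
        "(s_t, action) -> s_(t+1), reward":      "env.act_for_training(actions, idx)",       # section 4
        "store the transition in the buffer":    "observe(prestate, state, reward, action)", # section 3
        "sample the buffer and update the DQN":  "q_learning_mini_batch()",                  # section 3.1
        "update target_q_network":               "update_target_q_network()",                # section 3.2
    }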

1. Getting the state from the env

1.1 get_state(self, idx)

    def get_state(self, idx):
        vehicle_number = len(self.env.vehicles)
        V2V_channel = (self.env.V2V_channels_with_fastfading[idx[0],self.env.vehicles[idx[0]].destinations[idx[1]],:] - 80)/60
        V2I_channel = (self.env.V2I_channels_with_fastfading[idx[0], :] - 80)/60
        V2V_interference = (-self.env.V2V_Interference_all[idx[0],idx[1],:] - 60)/60
        NeiSelection = np.zeros(self.RB_number)
        for i in range(3):
            for j in range(3):
                if self.training:
                    NeiSelection[self.action_all_with_power_training[self.env.vehicles[idx[0]].neighbors[i], j, 0 ]] = 1
                else:
                    NeiSelection[self.action_all_with_power[self.env.vehicles[idx[0]].neighbors[i], j, 0 ]] = 1
                   
        for i in range(3):
            if i == idx[1]:
                continue
            if self.training:
                if self.action_all_with_power_training[idx[0],i,0] >= 0:
                    NeiSelection[self.action_all_with_power_training[idx[0],i,0]] = 1
            else:
                if self.action_all_with_power[idx[0],i,0] >= 0:
                    NeiSelection[self.action_all_with_power[idx[0],i,0]] = 1
        time_remaining = np.asarray([self.env.demand[idx[0],idx[1]] / self.env.demand_amount])
        load_remaining = np.asarray([self.env.individual_time_limit[idx[0],idx[1]] / self.env.V2V_limit])
        #print('shapes', time_remaining.shape,load_remaining.shape)
        return np.concatenate((V2I_channel, V2V_interference, V2V_channel, NeiSelection, time_remaining, load_remaining))#,time_remaining))
 

The input argument is idx: a vehicle-AP pair expressed as indices (idx[0] is the vehicle, idx[1] picks one of its three destinations), so this function retrieves the state of a single V2V link.

Note: when constructing your own state, remember that the RL agent learns a mapping from a single vehicle's state to its action. In other words, each state in the buffer corresponds to a single vehicle (link), not to all vehicles.

The code computes, in order: this link's V2V channel fading, the V2I channel fading, the V2V interference it receives, the neighbors' resource-block selections, the remaining load, and the remaining time (note that the variable names time_remaining and load_remaining in the code appear swapped relative to what they actually hold).

Finally, these pieces are stitched together with np.concatenate.
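
As a quick sanity check on the state layout, a minimal sketch of the concatenation (n_RB = 20 resource blocks is assumed here for illustration):

    import numpy as np

    n_RB = 20                              # assumed number of resource blocks
    V2I_channel      = np.zeros(n_RB)      # V2I fast-fading channel, one entry per RB
    V2V_interference = np.zeros(n_RB)      # interference measured on each RB
    V2V_channel      = np.zeros(n_RB)      # V2V fast-fading channel, one entry per RB
    NeiSelection     = np.zeros(n_RB)      # 0/1 mask of RBs occupied by the neighbors
    time_remaining   = np.zeros(1)
    load_remaining   = np.zeros(1)

    state = np.concatenate((V2I_channel, V2V_interference, V2V_channel,
                            NeiSelection, time_remaining, load_remaining))
    print(state.shape)                     # (82,) = 4 * n_RB + 2, the DQN input size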

2. Running the NN to get an action

2.1 predict(self, s_t, step, test_ep = False)

    def predict(self, s_t,  step, test_ep = False):
        # ==========================
        #  Select actions
        # ======================
        ep = 1/(step/1000000 + 1)
        if random.random() < ep and test_ep == False:   # epsilon to balance exploration and exploitation
            action = np.random.randint(60)
        else:          
            action =  self.q_action.eval({self.s_t:[s_t]})[0] 
        return action

Inputs: the current state s_t, the step count step, and the flag test_ep, which disables random exploration when set (used at test time).

The exploration/exploitation trade-off coefficient ep is derived from step; comparing ep with a random number decides whether to explore or exploit.

When exploring, the action is a random integer; when exploiting, the action is given by q_action.eval. q_action is defined in build_dqn(); for now it is enough to know that, given the current state s_t, it returns an action.
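
For a feel of the schedule ep = 1/(step/1000000 + 1), a few values computed directly from the formula:

    # epsilon schedule used in predict(): ep = 1 / (step / 1_000_000 + 1)
    for step in (0, 100_000, 1_000_000, 4_000_000):
        ep = 1 / (step / 1_000_000 + 1)
        print(step, round(ep, 3))
    # 0        1.0    -> fully random at the start
    # 100000   0.909
    # 1000000  0.5
    # 4000000  0.2    -> mostly greedy late in training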

3. Storing to the buffer + sampling from the buffer to update the DQN parameters + updating target_q_network

    def observe(self, prestate, state, reward, action):
        self.memory.add(prestate, state, reward, action) # add the state and the action and the reward to the memory
        if self.step > 0:
            if self.step % 50 == 0:  # every 50 steps, sample a batch from the buffer and train on it
                print('Training')
                self.q_learning_mini_batch()            # training a mini batch
                #self.save_weight_to_pkl()
            # when step hits target_q_update_step - 1 (mod target_q_update_step), refresh the target network
            if self.step % self.target_q_update_step == self.target_q_update_step - 1:
                print("Update Target Q network:")
                self.update_target_q_network()           # update target_q_network

3.1 q_learning_mini_batch()

    def q_learning_mini_batch(self):

        # Training the DQN model
        # ------ 
        s_t, s_t_plus_1, action, reward = self.memory.sample()        
        t = time.time()   
        # the branch below computes target_q_t
        if self.double_q:       #double Q learning   
            pred_action = self.q_action.eval({self.s_t: s_t_plus_1})       
            q_t_plus_1_with_pred_action = self.target_q_with_idx.eval({self.target_s_t: s_t_plus_1, self.target_q_idx: [[idx, pred_a] for idx, pred_a in enumerate(pred_action)]})            
            target_q_t =  self.discount * q_t_plus_1_with_pred_action + reward
        else:
            q_t_plus_1 = self.target_q.eval({self.target_s_t: s_t_plus_1})         
            max_q_t_plus_1 = np.max(q_t_plus_1, axis=1)
            target_q_t = self.discount * max_q_t_plus_1 +reward
        _, q_t, loss,w = self.sess.run([self.optim, self.q, self.loss, self.w], {self.target_q_t: target_q_t, self.action:action, self.s_t:s_t, self.learning_rate_step: self.step}) # training the network
        
        print('loss is ', loss)
        self.total_loss += loss
        self.total_q += q_t.mean()
        self.update_count += 1

3.1.1 First, memory.sample draws batch_size samples; this code lives in replay_memory.py, as shown below:

    def sample(self):
        indexes = []
        # use the while loop to draw batch_size random indices (sampling with replacement)
        while len(indexes) < self.batch_size:
            index = random.randint(0, self.count - 1)
            indexes.append(index)

        # fetch all sampled transitions at once via the index list
        prestate = self.prestate[indexes]
        poststate = self.poststate[indexes]
        actions = self.actions[indexes]
        rewards = self.rewards[indexes]
        return prestate, poststate, actions, rewards

3.1.2 Next, the flag double_q decides whether Double DQN is used; the following NN-related ops are evaluated:

Double DQN: q_action.eval takes s_t_plus_1 and returns pred_action; target_q_with_idx.eval takes s_t_plus_1 and pred_action and returns q_t_plus_1_with_pred_action.

DQN: target_q.eval takes s_t_plus_1 and returns q_t_plus_1.
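
Written out, the two branches compute the standard DQN and Double DQN targets. A small numpy sketch (the array values and the 0.5 discount are illustrative, not the repo's):

    import numpy as np

    def dqn_target(reward, q_target_next, discount):
        # vanilla DQN: the target network both selects and evaluates the next action
        return reward + discount * np.max(q_target_next, axis=1)

    def double_dqn_target(reward, q_online_next, q_target_next, discount):
        # Double DQN: the online network selects the action, the target network evaluates it
        pred_action = np.argmax(q_online_next, axis=1)
        return reward + discount * q_target_next[np.arange(len(pred_action)), pred_action]

    # toy batch of two transitions
    q_online_next = np.array([[1.0, 2.0], [0.5, 0.2]])
    q_target_next = np.array([[1.5, 1.0], [0.3, 0.4]])
    reward = np.array([0.0, 1.0])
    print(dqn_target(reward, q_target_next, 0.5))                        # [0.75 1.2 ]
    print(double_dqn_target(reward, q_online_next, q_target_next, 0.5))  # [0.5  1.15]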

3.1.3 Finally, one training step is run:

    self.sess.run(
        [self.optim, self.q, self.loss, self.w],   # fetches: optimizer step, Q values, loss, weights
        {self.target_q_t: target_q_t, self.action: action, self.s_t: s_t, self.learning_rate_step: self.step}  # feed dict: inputs
    )

3.2 update_target_q_network() 

    def update_target_q_network(self):    
        for name in self.w.keys():
            self.t_w_assign_op[name].eval({self.t_w_input[name]: self.w[name].eval()})  

This uses the NN-related ops t_w_assign_op[name] and t_w_input[name]: each target-network weight is assigned the current value of the corresponding online-network weight w[name].
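
The repo builds these copy ops inside build_dqn(); a minimal sketch of the usual TF1 pattern they imply (the dict-of-variables layout mirrors the snippet above, but the repo's exact construction may differ):

    import tensorflow as tf  # TensorFlow 1.x, matching the repo's sess.run / .eval usage

    def build_target_copy_ops(w, t_w):
        """Given dicts of online (w) and target (t_w) variables keyed by name, build
        the placeholder + assign ops that update_target_q_network() evaluates."""
        t_w_input, t_w_assign_op = {}, {}
        for name in w:
            t_w_input[name] = tf.placeholder(tf.float32,
                                             t_w[name].get_shape().as_list(),
                                             name='t_w_input_' + name)
            t_w_assign_op[name] = t_w[name].assign(t_w_input[name])
        return t_w_input, t_w_assign_op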

4. Obtaining the next state s_(t+1) and the reward from the current state s_t and the action

4.1 How act_for_training is used

This part is easy to miss because it does not appear in the agent: since it involves updating the env, the original author put it in env.py, and it is called from the train function, as shown below:

    for i in range(len(self.env.vehicles)):              
        for j in range(3): 
            state_old = self.get_state([i,j]) 
            action = self.predict(state_old, self.step)                    
            #self.merge_action([i,j], action)   
            self.action_all_with_power_training[i, j, 0] = action % self.RB_number
            self.action_all_with_power_training[i, j, 1] = int(np.floor(action/self.RB_number))                                                    
            reward_train = self.env.act_for_training(self.action_all_with_power_training, [i,j]) 
            state_new = self.get_state([i,j]) 
            self.observe(state_old, state_new, reward_train, action)

As the snippet shows, within one step it is executed once for every vehicle-AP pair; the function's inputs are the actions of all vehicles plus the idx of the current vehicle-AP pair.
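
The two assignments right before act_for_training decode the scalar action into a (resource block, power level) pair. Assuming RB_number = 20 and 3 power levels (so 60 discrete actions, matching np.random.randint(60) in predict), a small illustration:

    import numpy as np

    RB_number = 20                                       # assumed: 20 RBs x 3 power levels = 60 actions
    for action in (0, 19, 20, 59):
        rb = action % RB_number                          # goes into action_all_with_power_training[i, j, 0]
        power_level = int(np.floor(action / RB_number))  # goes into action_all_with_power_training[i, j, 1]
        print(action, rb, power_level)
    # 0  -> RB 0,  power level 0
    # 19 -> RB 19, power level 0
    # 20 -> RB 0,  power level 1
    # 59 -> RB 19, power level 2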

4.2 The act_for_training implementation

Inside act_for_training, the main work is: computing the reward for a single vehicle-AP pair, renewing the positions, renewing the channels, and recomputing the interference. The code is below:

    def act_for_training(self, actions, idx):
        # =============================================
        # This function gives rewards for training
        # ===========================================
        rewards_list = np.zeros(self.n_RB)
        action_temp = actions.copy()
        self.activate_links = np.ones((self.n_Veh, 3), dtype='bool')
        # important: the call below sweeps idx's entire action space!
        V2I_rewardlist, V2V_rewardlist, time_left = self.Compute_Performance_Reward_Batch(action_temp, idx)
        self.renew_positions()
        self.renew_channels_fastfading()
        self.Compute_Interference(actions)
        rewards_list = rewards_list.T.reshape([-1])
        V2I_rewardlist = V2I_rewardlist.T.reshape([-1])
        V2V_rewardlist = V2V_rewardlist.T.reshape([-1])
        V2I_reward = (V2I_rewardlist[actions[idx[0], idx[1], 0] + 20 * actions[idx[0], idx[1], 1]] - \
                      np.min(V2I_rewardlist)) / (np.max(V2I_rewardlist) - np.min(V2I_rewardlist) + 0.000001)
        V2V_reward = (V2V_rewardlist[actions[idx[0], idx[1], 0] + 20 * actions[idx[0], idx[1], 1]] - \
                      np.min(V2V_rewardlist)) / (np.max(V2V_rewardlist) - np.min(V2V_rewardlist) + 0.000001)
        lambdda = 0.1
        # print ("Reward", V2I_reward, V2V_reward, time_left)
        t = lambdda * V2I_reward + (1 - lambdda) * V2V_reward
        # print("time left", time_left)
        # return t
        return t - (self.V2V_limit - time_left) / self.V2V_limit

This function computes the reward for a single link. The naive expectation would be that it simply evaluates the rate of the link indicated by idx under its chosen action and turns that into the reward, but that is not what happens. The key point is:

The V2I_rewardlist returned by Compute_Performance_Reward_Batch is an (n_RB * n_V2V_PowerList) matrix. From the analysis of Compute_Performance_Reward_Batch further below, its meaning is: for the link idx, every possible action is tried, i.e., every combination of RB and transmit power is enumerated, and that produces V2I_rewardlist. Each entry is the system-level performance metric obtained under the action indicated by its row and column (here the system-level V2I rate is simply the sum of all links' rates).

Looking further down at how V2I_reward is computed: it is (the system-level metric produced by the currently chosen action minus the minimum system-level metric) divided by (the maximum minus the minimum), i.e., a min-max normalization over the whole action space.
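
A tiny numerical illustration of this min-max normalization and the final blend (the reward values are made up; lambdda = 0.1 matches the code):

    import numpy as np

    V2I_rewardlist = np.array([3.0, 5.0, 9.0, 7.0])       # made-up system-level V2I rates, one per action
    V2V_rewardlist = np.array([-4.0, -1.0, -6.0, -2.0])    # made-up (negated) remaining-data metrics
    chosen = 1                                             # flattened index of the action actually taken

    V2I_reward = (V2I_rewardlist[chosen] - V2I_rewardlist.min()) / (V2I_rewardlist.max() - V2I_rewardlist.min() + 1e-6)
    V2V_reward = (V2V_rewardlist[chosen] - V2V_rewardlist.min()) / (V2V_rewardlist.max() - V2V_rewardlist.min() + 1e-6)

    lambdda = 0.1
    t = lambdda * V2I_reward + (1 - lambdda) * V2V_reward
    print(round(V2I_reward, 3), round(V2V_reward, 3), round(t, 3))   # 0.333 1.0 0.933
    # both terms are pushed into [0, 1] relative to the best/worst action for this link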

【env.Compute_Performance_Reward_Batch】

This function appears in act_for_training. Its inputs are the actions and a single vehicle-AP pair, expressed as idx: (vehicle number, AP number).

Its return values are:

  • V2I_rewardlist: stores the V2I rates, shape (n_RB * len(V2V_power_dB_List))
  • V2V_rewardlist: stores, as negated values, the amount of data still to be transmitted on the V2V links, shape (n_RB * len(V2V_power_dB_List))
  • time_left: the remaining time for the link idx

The code is given below. This roughly 115-line function is genuinely convoluted, so first we need to understand what its return values mean.

Note that the return value has as many rows as there are RBs and as many columns as there are candidate V2V transmit powers (call this table G). Unlike the earlier tables (rows = number of vehicles, columns = number of AP-group members, one cell per link), the values stored in this table have two characteristics:

  1. Each cell describes the situation when link idx transmits on the RB given by the row index with the power given by the column index; in other words, every action combination is tried.
  2. The stored value is the sum of the rates of all V2V/V2I links in the system, i.e., a system-level metric. It follows that filling one cell of table G requires two supporting tables (one storing every link's received signal power, one storing every link's interference).

From characteristic 2: we need the sum of all V2V/V2I link rates, and computing a rate requires knowing every link's received signal power and interference. So we need tables storing every link's received power and interference (tables R and I), with rows = number of vehicles and columns = number of AP-group members.

From characteristic 1: the contents of table G are obtained while idx sweeps its action space, so as idx iterates, tables R and I necessarily change:

  • Table R: a link's received signal power does not depend on the other links, but for idx it does depend on the chosen RB (received power depends on fast fading, which is per-RB), so only idx's received power needs to be recomputed.
  • Table I: when idx tries RB x, it inevitably interferes with every link on RB x, so the interference of all links on RB x has to be recomputed.

So for tables R and I, most entries stay the same while idx sweeps its action space; we can therefore compute R and I once up front and only patch a few entries during the sweep, as sketched below.
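
A minimal sketch of the three tables and the copy-and-patch sweep (the sizes are assumed for illustration; the real per-entry formulas are in the code below):

    import numpy as np

    n_Veh, n_RB, n_power = 20, 20, 3                # assumed sizes
    V2V_Signal       = np.random.rand(n_Veh, 3)     # table R: received power of every link
    V2V_Interference = np.random.rand(n_Veh, 3)     # table I: interference seen by every link
    V2V_Rate_list    = np.zeros((n_RB, n_power))    # table G: system-level metric per candidate action

    idx = (0, 1)                                    # the link whose action space is being swept
    for rb in range(n_RB):
        for p in range(n_power):
            sig = V2V_Signal.copy()                 # patch only idx's received power for this (rb, p)
            interf = V2V_Interference.copy()        # patch idx's interference and that of links sharing rb
            # ... per-entry updates as in Compute_Performance_Reward_Batch ...
            V2V_Rate_list[rb, p] = np.sum(np.log2(1 + sig / interf))

The full function is given below.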

    def Compute_Performance_Reward_Batch(self, actions_power, idx):  # add the power dimension to the action selection
        # ==================================================
        # ------------- Used for Training ----------------
        # ==================================================
        actions = actions_power.copy()[:, :, 0]  #
        power_selection = actions_power.copy()[:, :, 1]  #

        V2V_Interference = np.zeros((len(self.vehicles), 3))  # interference received by every V2V link (table I)
        V2V_Signal = np.zeros((len(self.vehicles), 3))  # received signal power of every V2V link (table R)

        Interfence_times = np.zeros((len(self.vehicles), 3))  # 3 neighbors
        # print(actions)
        origin_channel_selection = actions[idx[0], idx[1]]
        actions[idx[0], idx[1]] = 100  # something not relevant

        # fill in V2V_Interference and V2V_Signal (i.e., compute tables R and I)
        for i in range(self.n_RB):  # iterate over every RB
            indexes = np.argwhere(actions == i)
            # print('index',indexes)
            for j in range(len(indexes)):  # for every link on this RB
                receiver_j = self.vehicles[indexes[j, 0]].destinations[indexes[j, 1]]  #
                V2V_Signal[indexes[j, 0], indexes[j, 1]] = 10 ** (
                            (self.V2V_power_dB_List[power_selection[indexes[j, 0], indexes[j, 1]]] - \
                             self.V2V_channels_with_fastfading[
                                 indexes[j, 0], receiver_j, i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                V2V_Interference[indexes[j, 0], indexes[j, 1]] += 10 ** (
                            (self.V2I_power_dB - self.V2V_channels_with_fastfading[i, receiver_j, i] + \
                             2 * self.vehAntGain - self.vehNoiseFigure) / 10)  # interference from the V2I links

                for k in range(j + 1, len(indexes)):
                    receiver_k = self.vehicles[indexes[k, 0]].destinations[indexes[k, 1]]
                    V2V_Interference[indexes[j, 0], indexes[j, 1]] += 10 ** (
                                (self.V2V_power_dB_List[power_selection[indexes[k, 0], indexes[k, 1]]] - \
                                 self.V2V_channels_with_fastfading[
                                     indexes[k, 0], receiver_j, i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                    V2V_Interference[indexes[k, 0], indexes[k, 1]] += 10 ** (
                                (self.V2V_power_dB_List[power_selection[indexes[j, 0], indexes[j, 1]]] - \
                                 self.V2V_channels_with_fastfading[
                                     indexes[j, 0], receiver_k, i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                    Interfence_times[indexes[j, 0], indexes[j, 1]] += 1
                    Interfence_times[indexes[k, 0], indexes[k, 1]] += 1

        # update the V2V interference stored in the env
        self.V2V_Interference = V2V_Interference + self.sig2

        # initialize table G
        V2V_Rate_list = np.zeros((self.n_RB, len(self.V2V_power_dB_List)))  # the number of RB times the power level
        Deficit_list = np.zeros((self.n_RB, len(self.V2V_power_dB_List)))

        # sweep idx's action space, recomputing tables R and I under each candidate action
        for i in range(self.n_RB):
            indexes = np.argwhere(actions == i)
            V2V_Signal_temp = V2V_Signal.copy()
            # receiver_k = self.vehicles[idx[0]].neighbors[idx[1]]
            receiver_k = self.vehicles[idx[0]].destinations[idx[1]]
            for power_idx in range(len(self.V2V_power_dB_List)):  # for each power level, recompute V2V signal and interference
                V2V_Interference_temp = V2V_Interference.copy()

                ## received signal of idx's V2V link under this candidate transmit power
                V2V_Signal_temp[idx[0], idx[1]] = 10 ** ((self.V2V_power_dB_List[power_idx] - \
                                                          self.V2V_channels_with_fastfading[
                                                              idx[0], self.vehicles[idx[0]].destinations[idx[1]], i]
                                                          + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)

                ## interference from the V2I (AP) link onto idx's V2V link; the first index of V2V_channels_with_fastfading is i because each V2I link occupies one RB and its index equals the RB index
                V2V_Interference_temp[idx[0], idx[1]] += 10 ** ((self.V2I_power_dB - \
                                                                 self.V2V_channels_with_fastfading[
                                                                     i, self.vehicles[idx[0]].destinations[idx[1]], i]
                                                                 + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                ## V2V-to-V2V interference for idx and the other links on this RB
                for j in range(len(indexes)):  # for every link on this RB
                    receiver_j = self.vehicles[indexes[j, 0]].destinations[indexes[j, 1]]
                    V2V_Interference_temp[idx[0], idx[1]] += 10 ** (
                                (self.V2V_power_dB_List[power_selection[indexes[j, 0], indexes[j, 1]]] - \
                                 self.V2V_channels_with_fastfading[
                                     indexes[j, 0], receiver_k, i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                    V2V_Interference_temp[indexes[j, 0], indexes[j, 1]] += 10 ** ((self.V2V_power_dB_List[power_idx] - \
                                                                                   self.V2V_channels_with_fastfading[
                                                                                       idx[0], receiver_j, i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                V2V_Rate_cur = np.log2(1 + np.divide(V2V_Signal_temp, V2V_Interference_temp))
                if (origin_channel_selection == i) and (power_selection[idx[0], idx[1]] == power_idx):
                    V2V_Rate = V2V_Rate_cur.copy()

                V2V_Rate_list[i, power_idx] = np.sum(V2V_Rate_cur)
                Deficit_list[i, power_idx] = 0 - 1 * np.sum(np.maximum(np.zeros(V2V_Signal_temp.shape), (
                            self.demand - self.individual_time_limit * V2V_Rate_cur * 1500)))  # the maximum(..., 0) clamps negative values


        Interference = np.zeros(self.n_RB)  # this Interference is per RB (at the base station)
        V2I_Rate_list = np.zeros((self.n_RB, len(self.V2V_power_dB_List)))  # n_RB x number of power levels
        for i in range(len(self.vehicles)):
            for j in range(len(actions[i, :])):
                if (i == idx[0] and j == idx[1]):  #
                    continue
                Interference[actions[i][j]] += 10 ** ((self.V2V_power_dB_List[power_selection[i, j]] - \
                                                       self.V2I_channels_with_fastfading[i, actions[i][
                                                           j]] + self.vehAntGain + self.bsAntGain - self.bsNoiseFigure) / 10)

        V2I_Interference = Interference + self.sig2  # interference at the AP from the vehicles, plus noise

        for i in range(self.n_RB):
            for j in range(len(self.V2V_power_dB_List)):
                V2I_Interference_temp = V2I_Interference.copy()
                # extra interference at the AP from idx's candidate V2V transmission (power level j on RB i)
                V2I_Interference_temp[i] += 10 ** ((self.V2V_power_dB_List[j] - self.V2I_channels_with_fastfading[
                    idx[0], i] + self.vehAntGain + self.bsAntGain - self.bsNoiseFigure) / 10)
                # V2I sum-rate under this candidate (RB, power) choice
                V2I_Rate_list[i, j] = np.sum(
                    np.log2(1 + np.divide(10 ** ((self.V2I_power_dB + self.vehAntGain + self.bsAntGain
                                                  - self.bsNoiseFigure - self.V2I_channels_abs[0:min(self.n_RB, self.n_Veh)]) / 10),
                                          V2I_Interference_temp[0:min(self.n_RB, self.n_Veh)])))

        self.demand -= V2V_Rate * self.update_time_train * 1500
        self.test_time_count -= self.update_time_train
        self.individual_time_limit -= self.update_time_train
        self.individual_time_limit[np.add(self.individual_time_limit <= 0, self.demand < 0)] = self.V2V_limit
        self.demand[self.demand < 0] = self.demand_amount
        if self.test_time_count == 0:
            self.test_time_count = 10
        return V2I_Rate_list, Deficit_list, self.individual_time_limit[idx[0], idx[1]]

 

5. Building the training flow

Apart from parameter initialization, everything is wrapped in the per-step loop, which does the following in order (a hedged skeleton is sketched after this list):

  1. When step is divisible by 2000, reset the game
  2. For every vehicle-AP pair:
    1. Get the state
    2. Get the action
    3. Store the action in the action table
    4. Get the reward and update the environment according to the action
    5. Observe: store to the buffer + sample from the buffer to update the DQN parameters + update target_q_network
  3. When step is divisible by 2000: run a test pass, with the goal of filling two tables, V2I_Rate_list and Fail_percent_list, each of size [number of tests]
    1. When step is small, run 10 tests; when step is divisible by 10000, run 50 tests; when step = 38000, run 100 tests
    2. In each test:
      1. Update the environment
      2. For every vehicle-AP pair: get_state, predict, merge_action
      3. After every tenth of the vehicles: call act_asyn (section 5.1), which returns reward and percent
      4. Sum the rewards and append them to Rate_list
      5. Average the contents of Rate_list and write the result into the table
      6. Write percent directly into the table
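
Putting items 1-3 together, a hedged skeleton of the loop (method names follow the snippets above; new_random_game is assumed as the reset helper and run_tests is a hypothetical name for the test pass, since the repo's actual train() carries more bookkeeping):

    import numpy as np

    def train(self):  # structural sketch of the agent's training loop, not the repo's exact code
        for self.step in range(self.max_step):
            if self.step % 2000 == 0:
                self.env.new_random_game()           # 1) reset the game (assumed env helper)
            for i in range(len(self.env.vehicles)):  # 2) every vehicle-AP pair
                for j in range(3):
                    state_old = self.get_state([i, j])                       # 2.1 state
                    action = self.predict(state_old, self.step)              # 2.2 action
                    self.action_all_with_power_training[i, j, 0] = action % self.RB_number          # 2.3 store action
                    self.action_all_with_power_training[i, j, 1] = int(np.floor(action / self.RB_number))
                    reward = self.env.act_for_training(self.action_all_with_power_training, [i, j])  # 2.4 reward + env update
                    state_new = self.get_state([i, j])
                    self.observe(state_old, state_new, reward, action)       # 2.5 buffer / mini-batch / target net
            if self.step % 2000 == 0:
                self.run_tests()                     # 3) test pass filling V2I_Rate_list / Fail_percent_list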

5.1 env.act_asyn

At test time, while iterating over all vehicles, this function is entered once for every tenth of the vehicles. It keeps an internal counter self.n_step, and when that counter is divisible by 10 it renews the positions and the channels.

On an ordinary entry it calls the following functions:

    env.Compute_Performance_Reward_fast_fading_with_power_asyn
    # input: all actions; performs the communication computations and returns V2I_Rate, fail_percent
    env.Compute_Interference
    # input: all actions; internally updates env.V2V_Interference_all

As for Compute_Performance_Reward_fast_fading_with_power_asyn: congratulations, it is another long and messy function, so we discuss it in detail in section 5.2.

5.2 Compute_Performance_Reward_fast_fading_with_power_asyn

First, to confirm: the input of this function is the actions of all links, and over one sweep of all vehicles it is entered 10 times in total.

Its return values are:

  • The V2I rates: V2I_Rate
  • The V2V transmission failure rate: fail_percent

The main work falls into three parts:

  • Compute the V2I rates
  • Compute the V2V rates
  • Update the variables tracked in the env, mainly using the V2V rates to recompute the remaining V2V data to transmit

    def Compute_Performance_Reward_fast_fading_with_power_asyn(self,
                                                               actions_power):  # revising based on the fast fading part
        # ===================================================
        #         --------- Used for Testing -------
        # ===================================================
        actions = actions_power[:, :, 0]  # the channel_selection_part
        power_selection = actions_power[:, :, 1]
        
        # compute the V2I rates
        Interference = np.zeros(self.n_RB)  # Calculate the interference from V2V to V2I
        for i in range(len(self.vehicles)):
            for j in range(len(actions[i, :])):
                if not self.activate_links[i, j]:
                    continue
                Interference[actions[i][j]] += 10 ** ((self.V2V_power_dB_List[power_selection[i, j]] - \
                                                       self.V2I_channels_with_fastfading[i, actions[i, j]] + \
                                                       self.vehAntGain + self.bsAntGain - self.bsNoiseFigure) / 10)
        self.V2I_Interference = Interference + self.sig2
        V2I_Signals = self.V2I_power_dB - self.V2I_channels_abs[0:min(self.n_RB,
                                                                      self.n_Veh)] + self.vehAntGain + self.bsAntGain - self.bsNoiseFigure
        V2I_Rate = np.log2(1 + np.divide(10 ** (V2I_Signals / 10), self.V2I_Interference[0:min(self.n_RB, self.n_Veh)]))

        # compute the V2V rates
        V2V_Interference = np.zeros((len(self.vehicles), 3))
        V2V_Signal = np.zeros((len(self.vehicles), 3))
        Interfence_times = np.zeros((len(self.vehicles), 3))
        actions[(np.logical_not(self.activate_links))] = -1
        for i in range(self.n_RB):
            indexes = np.argwhere(actions == i)
            for j in range(len(indexes)):
                # receiver_j = self.vehicles[indexes[j,0]].neighbors[indexes[j,1]]
                receiver_j = self.vehicles[indexes[j, 0]].destinations[indexes[j, 1]]
                V2V_Signal[indexes[j, 0], indexes[j, 1]] = 10 ** (
                            (self.V2V_power_dB_List[power_selection[indexes[j, 0], indexes[j, 1]]] - \
                             self.V2V_channels_with_fastfading[indexes[j][0]][receiver_j][
                                 i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                # V2V_Signal[indexes[j, 0],indexes[j, 1]] = 10**((self.V2V_power_dB_List[0] - self.V2V_channels_with_fastfading[indexes[j][0]][receiver_j][i])/10)
                if i < self.n_Veh:
                    V2V_Interference[indexes[j, 0], indexes[j, 1]] += 10 ** ((self.V2I_power_dB - \
                                                                              self.V2V_channels_with_fastfading[i][
                                                                                  receiver_j][
                                                                                  i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)  # V2I links interference to V2V links
                for k in range(j + 1, len(indexes)):
                    receiver_k = self.vehicles[indexes[k][0]].destinations[indexes[k][1]]
                    V2V_Interference[indexes[j, 0], indexes[j, 1]] += 10 ** (
                                (self.V2V_power_dB_List[power_selection[indexes[k, 0], indexes[k, 1]]] - \
                                 self.V2V_channels_with_fastfading[indexes[k][0]][receiver_j][
                                     i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                    V2V_Interference[indexes[k, 0], indexes[k, 1]] += 10 ** (
                                (self.V2V_power_dB_List[power_selection[indexes[j, 0], indexes[j, 1]]] - \
                                 self.V2V_channels_with_fastfading[indexes[j][0]][receiver_k][
                                     i] + 2 * self.vehAntGain - self.vehNoiseFigure) / 10)
                    Interfence_times[indexes[j, 0], indexes[j, 1]] += 1
                    Interfence_times[indexes[k, 0], indexes[k, 1]] += 1
        self.V2V_Interference = V2V_Interference + self.sig2
        V2V_Rate = np.log2(1 + np.divide(V2V_Signal, self.V2V_Interference))
        # print("V2I information", V2I_Signals, self.V2I_Interference, V2I_Rate)


        # below, update the variables tracked in the env
        
        # -- compute the latency constraints --
        self.demand -= V2V_Rate * self.update_time_asyn * 1500  # decrease the demand
        self.test_time_count -= self.update_time_asyn  # compute the time left for estimation
        self.individual_time_limit -= self.update_time_asyn  # compute the time left for individual V2V transmission
        self.individual_time_interval -= self.update_time_asyn  # compute the time interval left for next transmission

        # --- update the demand ---
        new_active = self.individual_time_interval <= 0
        self.activate_links[new_active] = True
        self.individual_time_interval[new_active] = np.random.exponential(0.02, self.individual_time_interval[
            new_active].shape) + self.V2V_limit
        self.individual_time_limit[new_active] = self.V2V_limit
        self.demand[new_active] = self.demand_amount

        # -- update the statistics---
        early_finish = np.multiply(self.demand <= 0, self.activate_links)
        unqulified = np.multiply(self.individual_time_limit <= 0, self.activate_links)
        self.activate_links[np.add(early_finish, unqulified)] = False
        self.success_transmission += np.sum(early_finish)
        self.failed_transmission += np.sum(unqulified)
        fail_percent = self.failed_transmission / (self.failed_transmission + self.success_transmission + 0.0001)
        return V2I_Rate, fail_percent

6. Building the NN

build_dqn(self)
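
The post does not walk through build_dqn, but the ops referenced earlier (q_action, target_q, target_q_with_idx, optim, loss and the w / t_w dictionaries) imply the usual TF1 DQN graph. A minimal sketch under those assumptions (layer sizes and the learning rate are illustrative, not the repo's; the target-copy ops from section 3.2 are omitted here):

    import tensorflow as tf  # TensorFlow 1.x, matching the repo's sess.run / .eval usage

    n_input, n_actions = 82, 60    # assumed: state size 4*n_RB+2, 20 RBs x 3 power levels

    def build_q_network(state_ph, scope):
        # plain fully connected Q-network; depth/width here are illustrative only
        with tf.variable_scope(scope):
            h = tf.layers.dense(state_ph, 256, activation=tf.nn.relu, name='fc1')
            h = tf.layers.dense(h, 128, activation=tf.nn.relu, name='fc2')
            return tf.layers.dense(h, n_actions, name='q')

    # online network
    s_t = tf.placeholder(tf.float32, [None, n_input], name='s_t')
    q = build_q_network(s_t, 'prediction')
    q_action = tf.argmax(q, axis=1)                           # used by predict()

    # target network
    target_s_t = tf.placeholder(tf.float32, [None, n_input], name='target_s_t')
    target_q = build_q_network(target_s_t, 'target')
    target_q_idx = tf.placeholder(tf.int32, [None, 2], name='target_q_idx')
    target_q_with_idx = tf.gather_nd(target_q, target_q_idx)  # Double DQN evaluation

    # loss and optimizer
    target_q_t = tf.placeholder(tf.float32, [None], name='target_q_t')
    action = tf.placeholder(tf.int64, [None], name='action')
    action_one_hot = tf.one_hot(action, n_actions)
    q_acted = tf.reduce_sum(q * action_one_hot, axis=1)       # Q(s_t, a_t)
    loss = tf.reduce_mean(tf.square(target_q_t - q_acted))
    optim = tf.train.RMSPropOptimizer(0.001).minimize(loss)   # learning rate illustrative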

7. Test demo

play(self, n_step = 100, n_episode = 100, test_ep = None, render = False)

 

Other agent functions not covered in detail:

  • merge_action(self, idx, action)
  • save_weight_to_pkl(self)
  • load_weight_from_pkl(self)
