【Wireless Sensing】【P8】WiFi Sensing in Practice - 3【PyTorch】

Preface:

      A "grokking" phenomenon shows up during training: for roughly the first 100 epochs the accuracy stays low, around 30%-60%, and after epoch 100 it suddenly jumps to 97% or higher. The paper provides a TensorFlow implementation; here it is ported to PyTorch.

               Human activity recognition has received great attention in recent years because of the many applications that aim to monitor human motion and behavior in indoor areas: health monitoring and fall detection for the elderly [1], context awareness, activity recognition for energy efficiency in smart homes [2], and other Internet of Things (IoT) based applications [3].

         In existing systems, a person has to wear a device equipped with motion sensors such as gyroscopes and accelerometers. The sensor data is either processed locally on the wearable device or sent to a server for feature extraction and classification with supervised learning algorithms. This kind of monitoring is called active monitoring, and such systems reach roughly 90% accuracy in recognizing activities such as sleeping, sitting, standing, walking, and running. However, always wearing a device is cumbersome, and it is often impossible in passive activity-recognition applications, where the person may not carry any sensor or wireless device at all. Camera-based systems can be used for passive activity recognition, but their line-of-sight (LOS) requirement is a major limitation; in addition, cameras raise privacy concerns and cannot be deployed in many environments. A passive monitoring system based on wireless signals, which does not intrude on people's privacy, is therefore desirable.

         Because WiFi is ubiquitous in indoor areas, it has recently become the focus of many activity-recognition studies. Such a system consists of a WiFi access point (AP) and one or more WiFi-enabled devices placed at different locations in the environment. When a person performs an activity, the body movement affects the wireless signal and changes the multipath profile of the system.

  

A. Techniques based on WiFi signal power. Received signal strength (RSSI) has been used successfully for active localization of WiFi devices with fingerprinting techniques, as summarized in [5], and it has also been used as a metric for passively tracking moving objects [6]. When a person stands between the WiFi device and the access point (AP), the signal is attenuated and a different RSSI is observed. Although RSSI is simple to use and easy to measure, it cannot capture the true changes of the signal caused by human movement, because RSSI is not a stable metric even when there are no dynamic changes in the environment.

      The dataset for this project is very large, so make sure to run it on a GPU or a TPU. On Colab, mount Google Drive and launch the training script like this:

import os
from google.colab import drive

drive.mount('/content/drive')
path = "/content/drive/AI_DataStall"
os.chdir(path)
%run main.py



Contents:

  1.   Overview of wireless sensing
  2.   Loading the dataset
  3.   Training

1 Overview of wireless sensing

     A wireless sensing project mainly consists of the following three steps:

     1: Data preprocessing

           label:    one-hot encoding

           input:   sliding-window framing, window_size = 1000 (see the sketch after this list)

          Preprocessing is the core of wireless sensing. Unlike images and audio, the amplitude and phase changes of the signal have a concrete physical meaning and mathematical model, so many preprocessing schemes are possible, for example conjugate multiplication across antennas.

      2: Dimensionality reduction

          Key point. The scheme used in this project: each second contains 1000 rows of data, so we keep every other row (downsampling by a factor of 2, i.e. 500 rows per window).

       3: Pattern recognition with a neural network
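Below is a minimal sketch of the framing and one-hot encoding described above, assuming the raw CSI stream is a NumPy array of shape (num_samples, 90) sampled at 1000 Hz. The window length (1000), the 2x downsampling, and the activity names follow this project; the step size and the helper names are hypothetical.

import numpy as np

ACTIVITIES = ["bed", "fall", "pickup", "run", "sitdown", "standup", "walk"]

def frame_csi(csi_stream, window_size=1000, step=200, downsample=2):
    # Cut a (num_samples, 90) CSI stream into overlapping windows,
    # keeping every `downsample`-th row -> result shape (n_windows, 500, 90)
    frames = []
    for start in range(0, len(csi_stream) - window_size + 1, step):
        frames.append(csi_stream[start:start + window_size:downsample])
    return np.stack(frames).astype(np.float32)

def one_hot(activity, classes=ACTIVITIES):
    # One-hot encode an activity label, e.g. one_hot("fall") -> [0,1,0,0,0,0,0]
    vec = np.zeros(len(classes), dtype=np.int8)
    vec[classes.index(activity)] = 1
    return vec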

    

    


2 Loading the dataset

   

# -*- coding: utf-8 -*-
"""
Created on Sat Jun 15 10:10:54 2024

@author: cxf
"""

import numpy as np
from torch.utils.data import Dataset
from torch.utils.data import random_split
import torch
from datetime import datetime

class MyCustomDataset(Dataset):
    # Custom Dataset wrapping the seven WiFi activity classes
    def __init__(self):
        # Load every activity, then concatenate data and labels into single tensors
        
        x_bed, x_fall, x_pickup, x_run, x_sitdown, x_standup, x_walk, \
        y_bed, y_fall, y_pickup, y_run, y_sitdown, y_standup, y_walk = dataImport()
        
        

        self.data   =   torch.cat((x_bed, x_fall, x_pickup, x_run, x_sitdown, x_standup, x_walk), dim=0)
        self.labels = torch.cat((y_bed, y_fall, y_pickup, y_run, y_sitdown, y_standup, y_walk), dim=0)
        print(self.data.shape)
        print(self.labels.shape)
        #print(self.data.shape,type(self.data))

        
 
    def __len__(self):
        # Return the number of samples
        return len(self.data)
 
    def __getitem__(self, idx):
        # Return one (data, label) pair
        return self.data[idx], self.labels[idx]  



    


def txt_import():
    # Load the raw data from the CSV files (one pair of files per activity)
    x_dic = {}
    y_dic = {}
    print("txt file importing...")

    beg_time = datetime.now()
    for i in ["bed", "fall", "pickup", "run", "sitdown", "standup", "walk"]:
        
        label =str(i)
        start_time = datetime.now()
  
        xx_txt = "./input_files/xx_1000_60_txt" + label + ".csv"
        yy_txt = "./input_files/yy_1000_60_txt" + label + ".csv"
        
        
    
        arrXX = np.loadtxt(xx_txt,  delimiter=',',dtype=np.float32) 
        arrYY = np.loadtxt(yy_txt,  delimiter=',',dtype=np.int8) 
        arrXX = arrXX.reshape(-1, 500,90)
        time_interval = datetime.now()-start_time
        print(label, "\t", time_interval.seconds, "s", "\t xx.shape:", np.shape(arrXX), "\t yy.shape:", np.shape(arrYY))
        x_dic[label]=arrXX
        y_dic[label]=arrYY
    
    time_interval = datetime.now() - beg_time
    print("\n txt_import total time:", time_interval.seconds, "s")
    return x_dic["bed"], x_dic["fall"], x_dic["pickup"], x_dic["run"], x_dic["sitdown"], x_dic["standup"], x_dic["walk"], \
        y_dic["bed"], y_dic["fall"], y_dic["pickup"], y_dic["run"], y_dic["sitdown"], y_dic["standup"], y_dic["walk"]
 
        
def dataImport():

    # Import every activity and convert the NumPy arrays to torch tensors
  
    x_bed, x_fall, x_pickup, x_run, x_sitdown, x_standup, x_walk, \
    y_bed, y_fall, y_pickup, y_run, y_sitdown, y_standup, y_walk = txt_import()              
    print(" bed =",len(x_bed), " fall=", len(x_fall), " pickup =", len(x_pickup), " run=", len(x_run), " sitdown=", len(x_sitdown), " standup=", len(x_standup), " walk=", len(x_walk))
    
    x_bed = torch.from_numpy(x_bed)
    x_fall = torch.from_numpy(x_fall)
    x_pickup = torch.from_numpy(x_pickup)
    x_run = torch.from_numpy(x_run)
    x_sitdown = torch.from_numpy(x_sitdown)
    x_standup = torch.from_numpy(x_standup)
    x_walk = torch.from_numpy(x_walk)
    
    
    y_bed  = torch.from_numpy(y_bed)
    y_fall = torch.from_numpy(y_fall)
    y_pickup= torch.from_numpy(y_pickup)
    y_run= torch.from_numpy(y_run)
    y_sitdown= torch.from_numpy(y_sitdown)
    y_standup= torch.from_numpy(y_standup)
    y_walk= torch.from_numpy(y_walk)
    
    return  x_bed, x_fall, x_pickup, x_run, x_sitdown, x_standup, x_walk, \
    y_bed, y_fall, y_pickup, y_run, y_sitdown, y_standup, y_walk 




def  getData():  
    # Build the full dataset and split it randomly into training and test subsets
    beg_time = datetime.now()
    print("\n getData start")
    # Instantiate the dataset
    dataset = MyCustomDataset()
    # Train/test split: 80% for training, 20% for testing
    train_size = int(len(dataset) * 0.8)
    test_size =  len(dataset) - train_size
    
    train_dataset, test_dataset = random_split(dataset, [train_size, test_size])
    total_time = datetime.now()-beg_time
    print("\n getData 耗时 ",total_time.seconds,"秒")
 
    return train_dataset,test_dataset
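As a quick sanity check (a sketch only, assuming the CSV files are in place and this module is saved as dataSample.py, the name imported by the training script), the split can be wrapped in DataLoaders and one batch inspected:

from torch.utils.data import DataLoader

train_dataset, test_dataset = getData()
train_loader = DataLoader(train_dataset, batch_size=200, shuffle=True)
X, y = next(iter(train_loader))
print(X.shape, y.shape)   # X is expected to be [200, 500, 90]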

   


3 Training

Hyper-parameters (carried over from the original TensorFlow implementation; the PyTorch script below redefines its own copies inside main()):

window_size = 500
threshold = 60

# Parameters
learning_rate = 0.0001
training_iters = 2000
batch_size = 200
display_step = 100

# Network Parameters
n_input = 90 # WiFi activity data input (img shape: 90*window_size)
n_steps = window_size # timesteps
n_hidden = 200 # hidden layer num of features original 200
n_classes = 7 # WiFi activity total classes
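Before walking through the full script, here is a minimal shape check for the model these parameters describe (a sketch; the batch size of 4 and the random input are arbitrary). A window of 500 time steps with 90 CSI values per step goes in, and one score per class comes out of the last hidden state:

import torch
from torch import nn

lstm = nn.LSTM(input_size=90, hidden_size=200)   # default layout: [seq, batch, input]
fc = nn.Linear(in_features=200, out_features=7)

x = torch.randn(500, 4, 90)      # [seq=500, batch=4, input=90]
out, (ht, ct) = lstm(x)          # ht: [num_layers=1, batch=4, hidden=200]
logits = fc(ht[-1])              # [batch=4, n_classes=7]
print(logits.shape)

The full PyTorch training script follows.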

# -*- coding: utf-8 -*-
"""
Created on Thu Jul 25 09:58:29 2024

@author: chengxf2
"""
import torch
from torch import nn
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
import torch.optim as optim 
import torch.onnx
import netron
from  dataSample import  getData 
from datetime import datetime

def draw_train(train_acc,validation_acc, train_loss,validation_loss):
    # Plot the accuracy curve
    plt.subplot(2,1,1 )
    plt.plot(train_acc)
    plt.plot(validation_acc)
    plt.xlabel("n_epoch")
    plt.ylabel("Accuracy")
    plt.legend(["train_acc","validation_acc"],loc=4)
    plt.ylim([0,1])

    # Plot the loss curve
    plt.subplot(2,1,2 )
    plt.plot(train_loss)
    plt.plot(validation_loss)
    plt.xlabel("n_epoch")
    plt.ylabel("Loss")
    plt.legend(["train_loss","validation_loss"],loc=1)
    plt.ylim([0,2])
    plt.show()

class RNN(nn.Module):
    
    def __init__(self, input_len, hidden_len,label_len=8):
      
      super(RNN, self).__init__()
      
      self.lstm = nn.LSTM(input_size= input_len, hidden_size =hidden_len)
      self.layer =  nn.Linear(in_features=hidden_len, out_features=label_len)

    def forward(self, X):
       # X:   [seq, batch, input_size]
       # h/c: [num_layers, batch, hidden_size]
       # out: [seq, batch, hidden_size]
       out, (ht, ct) = self.lstm(X)

       # classify from the last layer's final hidden state -> [batch, label_len]
       out = self.layer(ht[-1])
       # no softmax here: nn.CrossEntropyLoss applies log-softmax internally
       return out
   
def getacc(target, label):
    # target: raw scores [batch, n_classes]; label: class indices [batch]
    _, predY = target.max(dim=1)
    diff_count = torch.count_nonzero(predY - label)

    total_count = len(label)
    same_count = total_count - diff_count
    acc = same_count / total_count

    return acc, same_count, total_count
   
def main():
    # Initialization
    print("initializing parameters")
    train_loss = []
    train_acc = []
    validation_loss = []
    validation_acc = []

    # Parameters
    learning_rate = 0.0001
    maxIter = 2000
    batch_size = 200
    display_step = 5

    # Network Parameters
    n_steps = 500       # timesteps = window_size
    input_size = 90     # WiFi activity data input (img shape: 90*window_size)
    hidden_size = 200   # hidden layer num of features, original 200
    n_classes = 8       # WiFi activity total classes
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = RNN(input_size, hidden_size, n_classes).to(device)
  
    
    train_dataset, test_dataset = getData()
    
    # 创建数据加载器
    train_loader = DataLoader(train_dataset,  batch_size=batch_size, shuffle=True)
    test_loader =  DataLoader(test_dataset,   batch_size=batch_size, shuffle=False)

    print("\n --download finsh v1---")

    # DataLoader迭代产生训练数据提供给模型 

    criterion = nn.CrossEntropyLoss()  #交叉熵
    optimizer = optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999)) #反向传播
    
    start_time = datetime.now()
    bestacc = 0.0
    total_correct =0.0
    total_count = 0.0
    
    for epoch in range(maxIter):
       # training mode
       model.train()
       print("\n step1 train:",epoch)
       for batchindex,(X,label) in enumerate(train_loader):
          

          #X[batch, seq, input_dim]=>[seq, batch, input_dim]
          X = torch.transpose(X, 0, 1)
          X = X.to(device)
          Y = label.to(device)
          
          predY = model(X)
          loss_train = criterion(predY, Y)
            
          # backpropagation
          optimizer.zero_grad()
          loss_train.backward()
          optimizer.step()  # update parameters
          acc_train,same_count, total_count = getacc(predY, Y)
          
          if batchindex%display_step == 0:
              print('batchindex %d, loss %.3f acc %.3f '%(batchindex, loss_train, acc_train))
        
       train_loss.append(loss_train.item())
       train_acc.append(acc_train.item())
              
     
       # evaluation mode
       print("\n step2: validate:")
       
       model.eval()
       acc_val= 0
       total_val = 0
       total_correct =0
       with torch.no_grad():
           for batchindex,(X,label) in enumerate(test_loader):
               X = torch.transpose(X, 0, 1)
               X = X.to(device)
               Y = label.to(device)
               predY = model(X)
               
               loss_val = criterion(predY, Y)
               _,same_count, total_count= getacc(predY, Y)
               total_correct += same_count
               total_val     += total_count
               
           acc_val = total_correct/total_val
           validation_acc.append(acc_val.item())
           validation_loss.append(loss_val.item())
           
           if acc_val>bestacc:
               bestacc = acc_val
               torch.save(model, 'model.pt')
           
           print('\n validation: epoch: {}, val_acc: {}, total samples: {}'.format(epoch, acc_val, total_val))
           
       time_interval = datetime.now()-start_time
       print("epoch finshed %d   time :%4.2f"%(epoch, time_interval.seconds))
    print("\n 训练结束 耗时: %4.2f   bestacc: %4.2f"%(time_interval.seconds, bestacc))
    
    draw_train(train_acc,validation_acc, train_loss,validation_loss)
    '''
    # Create a sample input
    x = torch.randn((n_steps, 1, input_size), requires_grad=False)

    # Export the model to ONNX format
    torch.onnx.export(model,                     # model instance
                  x,                             # model input
                  "model.onnx",                  # name of the exported ONNX file
                  export_params=True,            # export the parameters
                  opset_version=10,              # ONNX opset version
                  do_constant_folding=True,      # apply constant folding
                  input_names=['input'],         # name of the model input
                  output_names=['output'],       # name of the model output
                  dynamic_axes={'input': {0: 'batch_size'},   # dynamic axes
                                'output': {0: 'batch_size'}})
    print('bestacc', bestacc)
    netron.start('model.onnx')
    '''
  

if __name__ == "__main__":
     main()
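After training, the best checkpoint saved above with torch.save(model, 'model.pt') can be reloaded for inference on a single CSI window. This is a sketch under the same shape assumptions as the script (the random tensor stands in for a real preprocessed window of shape [500, 90]); note that loading a fully pickled model requires the RNN class to be importable, and on recent PyTorch versions torch.load may additionally need weights_only=False:

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = torch.load('model.pt', map_location=device)
model.eval()

window = torch.randn(500, 90)          # one preprocessed CSI window [seq, input]
x = window.unsqueeze(1).to(device)     # add a batch dimension -> [seq, batch=1, input]
with torch.no_grad():
    logits = model(x)                  # [1, n_classes]
    pred = logits.argmax(dim=1)
print(pred)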


Reference:

pytorch-pytorch之LSTM_pytorch nn.lstm-CSDN博客
