Open-set recognition with radar micro-Doppler signatures

The paper's framework

It consists of two parts: (1) a deep discriminative representation network (DDRN) trained with a cosine-margin loss, and (2) a probabilistic discriminative model based on class inclusion probability (CIP).

Specifically:

The DDRN uses a cosine-margin loss to learn an embedding space in which samples of the same class are pulled closer together and samples of different classes are pushed apart, improving feature discriminability.

The CIP model fits, for each known class, a reverse Weibull distribution in the embedding space; this estimates the probability that a test sample belongs to that class, and a threshold then decides whether the sample comes from a known class or an unknown one.

The paper's experiments were run on a natural-gait micro-Doppler dataset measured with a 77 GHz FMCW radar, and the results demonstrate the effectiveness of the method for open-set human identification.

The paper's model

The main model in this paper is an open-set human identification framework built from a deep discriminative representation network (DDRN) and a probabilistic discriminative model (CIP). It is constructed as follows:

First, the authors take a pretrained ResNet-18 as the backbone and append two fully connected layers, forming the deep discriminative representation network (DDRN). This network maps radar micro-Doppler signatures into an embedding space in which samples of the same class are compact and samples of different classes are well separated.

Then, the authors train the DDRN with a cosine-margin loss (CM loss), which extends the SoftMax loss with a cosine margin parameter m that increases inter-class distances and reduces intra-class variance, improving feature discriminability.
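The paper's exact CM-loss formulation isn't reproduced here; as a reference point, below is a minimal CosFace-style cosine-margin loss sketch in PyTorch. The scale s=30, margin m=0.35, and the class name `CosineMarginLoss` are my own illustrative choices, not necessarily the paper's values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineMarginLoss(nn.Module):
    """CosFace-style cosine-margin loss: subtract a margin m from the
    target-class cosine similarity before a scaled softmax."""
    def __init__(self, embed_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.s = s  # scale factor applied to the logits
        self.m = m  # cosine margin applied only to the true class
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class weights
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        margin = torch.zeros_like(cosine)
        margin.scatter_(1, labels.unsqueeze(1), self.m)
        logits = self.s * (cosine - margin)
        return F.cross_entropy(logits, labels)
```

During training, a module like this would replace the plain SoftMax cross-entropy on the embedding vectors.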

Next, in the trained embedding space, the authors fit for each known class a reverse Weibull distribution grounded in extreme value theory (EVT). It estimates the probability that a test sample belongs to that class, and a threshold decides whether the sample is from a known class or an unknown one. This probabilistic model is called the class inclusion probability (CIP) model.

Finally, a decision function performs open-set human identification: if the probability that a test sample belongs to some known class exceeds the threshold, the sample is assigned to that class; otherwise it is labeled unknown.
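As a rough illustration of the CIP-plus-threshold decision (not the paper's exact construction), here is a sketch assuming class membership is scored by Euclidean distance to the class mean in the embedding space, with scipy's `weibull_min` standing in for the reverse-Weibull fit; the threshold `tau` and all function names are illustrative:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_models(embeddings, labels):
    """Per known class: the mean embedding plus a Weibull model of the
    distances from that class's training samples to its mean."""
    models = {}
    for c in np.unique(labels):
        feats = embeddings[labels == c]
        mu = feats.mean(axis=0)
        dists = np.linalg.norm(feats - mu, axis=1)
        shape, loc, scale = weibull_min.fit(dists, floc=0.0)
        models[c] = (mu, shape, scale)
    return models

def class_inclusion_prob(x, model):
    """Probability that x belongs to the class: 1 - CDF(distance)."""
    mu, shape, scale = model
    d = np.linalg.norm(x - mu)
    return 1.0 - weibull_min.cdf(d, shape, scale=scale)

def open_set_decide(x, models, tau=0.5):
    """Assign x to the known class with the highest inclusion probability,
    or to 'unknown' if no probability reaches the threshold tau."""
    probs = {c: class_inclusion_prob(x, m) for c, m in models.items()}
    best = max(probs, key=probs.get)
    return best if probs[best] >= tau else 'unknown'
```

A sample close to a class mean gets a high inclusion probability; a sample far from every class mean falls below `tau` and is rejected as unknown.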

Could anyone point me to how to find code for reproducing this paper in PyTorch? I'm a complete beginner~~

My own code

My own dataset was collected with an SDR2400AD radar system: walking-gait echo data from 10 subjects. Each collection run has a single target, who starts about 3 m from the radar antenna, walks straight toward it, then walks back. Each run lasts 6 seconds, and each subject was recorded 100 times, giving 1000 runs in total.

The FMCW radar gait echoes of the 10 subjects were then preprocessed to obtain the corresponding micro-Doppler time-frequency spectrograms.
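For reference, this time-frequency preprocessing can be sketched with a short-time Fourier transform; the sampling rate, window length, and overlap below are illustrative values, not the actual SDR2400AD processing parameters:

```python
import numpy as np
from scipy.signal import stft

def micro_doppler_spectrogram(echo, fs=1000, nperseg=128, noverlap=96):
    """STFT of a (complex) slow-time echo signal; returns the
    log-magnitude time-frequency map in dB."""
    f, t, Z = stft(echo, fs=fs, nperseg=nperseg, noverlap=noverlap,
                   return_onesided=False)  # two-sided: keep negative Doppler
    spec = 20 * np.log10(np.abs(Z) + 1e-12)  # small offset avoids log(0)
    return f, t, spec

# Example: a 6-second synthetic tone standing in for a Doppler signature
fs = 1000
tt = np.arange(6 * fs) / fs
sig = np.exp(1j * 2 * np.pi * 100 * tt)  # 100 Hz Doppler tone
f, t, spec = micro_doppler_spectrogram(sig, fs=fs)
```

The resulting `spec` array is what would be saved as an image per run and fed to the network.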

The code below covers the ResNet-18 and the DDRN. Is the logic correct, and how would I go about adding other models?

Model construction (network definition)

import torch
import torch.nn as nn
from torchvision.models import resnet18

__all__ = ['ResNet18', 'resnet18', 'BasicBlock', 'DDRN']

# Note: resnet18(pretrained=True) downloads and caches the pretrained
# weights itself, so manually fetching resnet18-5c106cde.pth with
# urllib3 is unnecessary (and model_urls has been removed from recent
# torchvision versions, so importing it would fail).


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride
        # Dropout for regularization; L2 regularization belongs in the
        # optimizer's weight_decay, not inside the forward pass.
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.dropout(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out


class ResNet18(nn.Module):
    def __init__(self, block=BasicBlock, layers=[2, 2, 2, 2], num_classes=10):
        super(ResNet18, self).__init__()
        self.inplanes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc1 = nn.Linear(512, 128)  # fully connected layer fc1
        self.fc2 = nn.Linear(128, num_classes)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes),
            )

        layers = [block(self.inplanes, planes, stride, downsample)]
        self.inplanes = planes
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        # L2 regularization is handled by the optimizer's weight_decay;
        # adding weight norms to the logits here would corrupt the output.
        return x


class DDRN(nn.Module):
    def __init__(self, num_classes, resnet_model):
        super(DDRN, self).__init__()
        self.resnet = resnet_model
        # Replace the backbone's classifier so it outputs a 512-dim feature
        self.resnet.fc = nn.Linear(512, 512)
        self.relu = nn.ReLU(inplace=True)
        # The two appended FC layers: a 128-dim embedding, then class logits.
        # (A deconvolution/upsampling path has no role in a classifier and
        # is omitted.)
        self.fc1 = nn.Linear(512, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.resnet(x)       # (batch, 512)
        x = self.relu(x)
        embedding = self.fc1(x)  # (batch, 128) embedding for the CM loss / CIP model
        logits = self.fc2(embedding)
        return logits


# Load a pretrained ResNet-18
resnet = resnet18(pretrained=True)

# Build the DDRN model
ddrn = DDRN(num_classes=10, resnet_model=resnet)

# Print the model structure
print(ddrn)

# Set the model to training mode
ddrn.train()


Training the network on my own dataset

import torch
import random
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder
from net import DDRN
import matplotlib.pyplot as plt
from typing import Any
from torchvision.models import resnet18

# Hyperparameters
batch_size = 16
learning_rate_backbone = 0.0001
learning_rate_fc = 0.002
num_classes = 10
num_epochs = 10

# Data augmentation
train_transforms_list = [
    transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ]),
    transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.RandomRotation(degrees=15),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ]),
    transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.RandomCrop(size=64, padding=4),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ]),
]

test_transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

# Load the datasets
train_datasets_list = [
    ImageFolder('train', transform=train_transforms_list[0]),
    ImageFolder('train', transform=train_transforms_list[1]),
    ImageFolder('train', transform=train_transforms_list[2])
]

test_dataset = ImageFolder('test', transform=test_transform)

train_loaders_list = [
    DataLoader(train_datasets_list[0], batch_size=batch_size, shuffle=True),
    DataLoader(train_datasets_list[1], batch_size=batch_size, shuffle=True),
    DataLoader(train_datasets_list[2], batch_size=batch_size, shuffle=True)
]

test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Define the model
model = DDRN(num_classes=num_classes, resnet_model=resnet18(pretrained=True))
criterion = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_backbone, weight_decay=1e-4)
# Learning-rate scheduler
step_size = 5
gamma = 0.9
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma)

# Four lists to record the four metrics
train_losses = []
train_accs = []
test_losses = []
test_accs = []

# Train the model
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

for epoch in range(num_epochs):
    # Pick one augmentation pipeline at random for this epoch
    train_transform = random.choice(train_transforms_list)
    train_dataset = ImageFolder('train', transform=train_transform)
    train_loader: DataLoader[Any] = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

    model.train()
    train_loss = 0.0
    train_acc = 0.0
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        train_loss += loss.item() * inputs.size(0)
        _, preds = torch.max(outputs, 1)
        train_acc += torch.sum(preds == labels.data).item()  # .item(): keep a plain float, not a (CUDA) tensor

    train_loss = train_loss / len(train_dataset)
    train_acc = train_acc / len(train_dataset)

    print('Epoch: [{}/{}], Train Loss: {:.4f}, Train Acc: {:.4f}, Transform: #{}'.format(
        epoch + 1, num_epochs, train_loss, train_acc,
        train_transforms_list.index(train_transform)))

    model.eval()
    test_loss = 0.0
    test_acc = 0.0
    with torch.no_grad():  # no gradients needed during evaluation
        for i, (inputs, labels) in enumerate(test_loader):
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = model(inputs)
            loss = criterion(outputs, labels)

            test_loss += loss.item() * inputs.size(0)
            _, preds = torch.max(outputs, 1)
            test_acc += torch.sum(preds == labels.data).item()

    test_loss = test_loss / len(test_dataset)
    test_acc = test_acc / len(test_dataset)

    print('Epoch: [{}/{}], Test Loss: {:.4f}, Test Acc: {:.4f}'.format(epoch + 1, num_epochs, test_loss, test_acc))

    # Update the learning rate
    scheduler.step()

    # Record the epoch's metrics
    train_losses.append(train_loss)
    train_accs.append(train_acc)
    test_losses.append(test_loss)
    test_accs.append(test_acc)

# Plot the loss and accuracy curves
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.title('Loss')
plt.plot(train_losses, label='Train Loss')
plt.plot(test_losses, label='Test Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.title('Accuracy')
plt.plot(train_accs, label='Train Acc')
plt.plot(test_accs, label='Test Acc')
plt.legend()
plt.show()

# Save the model
torch.save(model.state_dict(), 'resnet18_model.pth')


 
