[Notes to self] Building a CNN to recognize IQ-channel modulated signals

Modulation signal recognition with a CNN

Both the I and Q channels are used, so each signal has size 2×128; to match the CNN input format it is first reshaped to 1×2×128. The dataset is [RML2016.10a]. This post is mainly a reproduction of the paper's code, rewriting the original TensorFlow/Keras network in PyTorch.
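A minimal sketch of that reshape, assuming a single sample stored as a 2×128 NumPy array (the variable names here are only illustrative):

import numpy as np
import torch

sample = np.random.randn(2, 128).astype(np.float32)  # one IQ frame: row 0 = I, row 1 = Q
x = torch.from_numpy(sample).unsqueeze(0)             # add a channel dim -> shape (1, 2, 128)
print(x.shape)                                        # torch.Size([1, 2, 128])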

Dataset parameters

220,000 modulated signals in total, covering 11 modulation types with 20,000 signals per type. The SNR ranges from -20 dB to 18 dB in 2 dB steps, 20 SNRs in all, i.e. 1,000 signals per modulation type per SNR.
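A quick way to check this composition is to look at the keys of the pickle file; a sketch, assuming the same local path used in the data-processing code below:

import pandas as pd

data = pd.read_pickle('.\\RML2016.10a\\RML2016.10a_dict.pkl')  # dict keyed by (modulation, snr)
mods = sorted({mod for mod, snr in data.keys()})                # 11 modulation types
snrs = sorted({snr for mod, snr in data.keys()})                # 20 SNRs: -20, -18, ..., 18
print(len(mods), len(snrs), data[(mods[0], snrs[0])].shape)     # 11 20 (1000, 2, 128)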

The dataset comes from the paper

Convolutional Radio Modulation Recognition Networks

The generation code is at

https://gitcode.net/mirrors/radioML/dataset

It needs GNU Radio to run.

fD = 1                       # maximum Doppler frequency
delays = [0.0, 0.9, 1.7]     # multipath delay profile
mags = [1, 0.8, 0.3]         # corresponding tap magnitudes
ntaps = 8                    # number of taps in the fading filter
noise_amp = 10**(-snr/10.0)  # noise amplitude derived from the target SNR
chan = channels.dynamic_channel_model( 200e3, 0.01, 50, .01, 0.5e3, 8, fD, True, 4, delays, mags, ntaps, noise_amp, 0x1337 )  # 200e3 is the sample rate

From this we can see that the dataset uses 8 samples per symbol and a sample rate of 200 kHz.

Data processing

import numpy as np
import pandas as pd
import torch

data = pd.read_pickle('.\\RML2016.10a\\RML2016.10a_dict.pkl')
snrs, mods = map(lambda j: sorted(list(set(map(lambda x: x[j], data.keys())))), [1,0])
X = []
label = []
for mod in mods:
    for snr in snrs:
        X.append(data[(mod,snr)])
        for i in range(data[(mod, snr)].shape[0]):  label.append((mod,snr))
X = np.vstack(X)    # stack the list of per-(mod,snr) arrays into one ndarray

The result:

X = ndarray: (220000, 2, 128)
label = list: 220000 entries, e.g. [('8PSK', -20), ...]
mods = ['8PSK', 'AM-DSB', 'AM-SSB', 'BPSK', 'CPFSK', 'GFSK', 'PAM4', 'QAM16', 'QAM64', 'QPSK', 'WBFM']
# randomly pick a subset as train and test (here only 1% of the samples go to training)
np.random.seed(2023)
n_examples = X.shape[0]
n_train = int(n_examples * 0.01)
train_idx = np.random.choice(range(0, n_examples), size=n_train, replace=False).tolist()
test_idx = list(set(range(0, n_examples))-set(train_idx))
X_train = torch.tensor(X[train_idx]).view(len(train_idx),1,2,128)
X_test = torch.tensor(X[test_idx]).view(len(test_idx),1,2,128)
Yy_train = []
Yy_test = []
for i in train_idx: Yy_train.append(label[i][0])
for i in test_idx: Yy_test.append(label[i][0])
classes = mods
def to_onehot(yy):
    # map modulation names to class indices, then build one-hot vectors
    yy1 = []
    for j in range(len(yy)):
        yy1.append(classes.index(yy[j]))
    yy1 = np.array(yy1)
    yy1 = torch.tensor(yy1)
    yy2 = torch.zeros([len(yy), len(classes)])   # use len(classes), not max(yy1)+1, in case a class is absent
    for i in range(len(yy)):
        yy2[i][yy1[i]] = 1
    return yy2
Y_train = to_onehot(Yy_train)
Y_test = to_onehot(Yy_test)

The result:

X_train = tensor: (n_train, 1, 2, 128)
Y_train = tensor: (n_train, 11), e.g. tensor([[0,0,0,0,0,1,0,0,0,0,0]])
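As a side note, PyTorch's nn.CrossEntropyLoss also accepts integer class indices directly (and, in recent versions, probability targets, which is why the one-hot vectors above work with it). A sketch of the index-based alternative, reusing the variables defined above:

# integer class indices instead of one-hot vectors; shape (n_train,), dtype long
Y_train_idx = torch.tensor([classes.index(m) for m in Yy_train], dtype=torch.long)
Y_test_idx  = torch.tensor([classes.index(m) for m in Yy_test],  dtype=torch.long)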

Network

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.conv1 = nn.Sequential(
            nn.ZeroPad2d((2,2,0,0))   # padding: (left, right, top, bottom)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(
                in_channels=1,
                out_channels=256,
                kernel_size=(1,3),
                stride=1
            ),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.ZeroPad2d((2,2,0,0))
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(
                in_channels=256,
                out_channels=80,
                kernel_size=(2,3),
                stride=1
            ),
            nn.ReLU(),
            nn.Dropout(p=0.5),
        )
        self.dense1 = nn.Sequential(
            nn.Linear(10560, 256),
            nn.ReLU(),
            nn.Dropout(p=0.5)
        )
        self.dense2 = nn.Sequential(
            nn.Linear(256, len(classes)),
            # output raw logits: nn.CrossEntropyLoss used below applies log-softmax itself,
            # so an extra nn.Softmax layer here would be redundant
        )

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = torch.flatten(out)   # flatten one (unbatched) sample to a 1-D vector of length 10560
        out = self.dense1(out)
        out = self.dense2(out)
        return out                 # shape (len(classes),)
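A quick sanity check of the flattened size (80 channels × 1 × 132 = 10560, which is where the Linear(10560, 256) comes from). This sketch assumes a reasonably recent PyTorch that accepts unbatched 3-D input to Conv2d, which is also what the per-sample training loop below relies on:

dummy = torch.randn(1, 2, 128)      # one IQ sample with a channel dim, no batch dim
with torch.no_grad():
    logits = MyNet()(dummy)
print(logits.shape)                 # torch.Size([11]) -> one score per modulation class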


# Model
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
epochs = 10
cnn_model = MyNet().to(device)

# create the loss and optimizer once, outside the loop;
# rebuilding Adam on every sample would throw away its running statistics
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn_model.parameters(), lr=0.001)


def train(itr):
    running_loss = 0
    cnn_model.train()
    print("start train")
    for x, y in zip(X_train, Y_train):   # one sample at a time, no batching
        x, y = x.to(device), y.to(device)
        y_pred = cnn_model(x)

        loss = loss_func(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()      # .item() already detaches, no no_grad needed

    ep_loss = running_loss / len(X_train)

    print('epoch: ', itr + 1,
          'loss: ', round(ep_loss, 3))

    return ep_loss

train_loss = []
for epoch in range(epochs):
    epoch_loss = train(epoch)
    train_loss.append(epoch_loss)

My machine isn't great, so I haven't actually run this yet. Note the `for x, y in zip(X_train, Y_train)`: without zip, Python raises a "not enough values to unpack (expected 2)" error.
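Once training does finish, a simple way to check test accuracy with the same one-sample-at-a-time loop (a sketch under the same unbatched-input assumption, also not run here):

def evaluate():
    cnn_model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in zip(X_test, Y_test):
            x = x.to(device)
            logits = cnn_model(x)                           # shape (11,)
            if logits.argmax().item() == y.argmax().item():
                correct += 1
    return correct / len(X_test)

print('test accuracy:', round(evaluate(), 4))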

Reference blog

https://blog.csdn.net/qq_40919669/article/details/111192353
