Ocean University of China, Fall 2024, "Principles and Practice of Software Engineering", Lab 4: MobileNet & ShuffleNet

Code Exercise

1. Download the Indian Pines dataset
! wget http://www.ehu.eus/ccwintco/uploads/6/67/Indian_pines_corrected.mat
! wget http://www.ehu.eus/ccwintco/uploads/c/c4/Indian_pines_gt.mat
  • Indian Pines is a standard hyperspectral dataset that is widely used in classification research.

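After downloading, the two .mat files can be loaded with scipy.io to check their contents. A minimal sketch; the dictionary keys 'indian_pines_corrected' and 'indian_pines_gt' are assumed to be the keys stored in these files:

import scipy.io as sio

# Load the hyperspectral cube and the ground-truth label map
X = sio.loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']
y = sio.loadmat('Indian_pines_gt.mat')['indian_pines_gt']

print(X.shape)  # (145, 145, 200): height, width, spectral bands
print(y.shape)  # (145, 145): one class label per pixel, 0 = background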

2. Import the necessary libraries and modules
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report, cohen_kappa_score
import spectral
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

Network Architecture and Training

3. Define the HybridSN network
  • 3D convolutions: extract joint spectral and spatial features.
  • 2D convolution: extracts higher-level spatial features.
  • Fully connected layers: produce the final classification.

For a (1, 30, 25, 25) input patch, the three valid 3D convolutions reduce the 30 spectral bands to 18 and the 25 x 25 window to 19 x 19, so the resulting (32, 18, 19, 19) volume is reshaped into 32 * 18 = 576 channels for the 2D convolution, which in turn produces the 64 * 17 * 17 features fed into the first fully connected layer.

# Network definition
class HybridSN(nn.Module):
    def __init__(self, class_num=16):
        super(HybridSN, self).__init__()
        
        # 3D Convolutional Layers (for spectral feature extraction)
        self.conv3d_1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=(7, 3, 3), stride=1, padding=0)
        self.conv3d_2 = nn.Conv3d(in_channels=8, out_channels=16, kernel_size=(5, 3, 3), stride=1, padding=0)
        self.conv3d_3 = nn.Conv3d(in_channels=16, out_channels=32, kernel_size=(3, 3, 3), stride=1, padding=0)
        
        # 2D Convolutional Layer (for spatial feature extraction)
        self.conv2d = nn.Conv2d(in_channels=32 * 18, out_channels=64, kernel_size=(3, 3), stride=1, padding=0)
        
        # Fully connected layers
        self.fc1 = nn.Linear(64 * 17 * 17, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, class_num)
        
        # Dropout layer
        self.dropout = nn.Dropout(p=0.4)
        
    def forward(self, x):
        # Input: x is (batch_size, 1, spectral_bands, height, width)
        
        # 3D Convolutions
        x = F.relu(self.conv3d_1(x))  # Output: (batch_size, 8, 24, 23, 23)
        x = F.relu(self.conv3d_2(x))  # Output: (batch_size, 16, 20, 21, 21)
        x = F.relu(self.conv3d_3(x))  # Output: (batch_size, 32, 18, 19, 19)
        
        # Reshape for 2D convolutions: (batch_size, 32*18, 19, 19)
        x = x.view(x.size(0), 32 * 18, 19, 19)
        
        # 2D Convolution
        x = F.relu(self.conv2d(x))  # Output: (batch_size, 64, 17, 17)
        
        # Flatten the feature map for fully connected layers
        x = x.view(x.size(0), -1)  # Output: (batch_size, 64 * 17 * 17 = 18496)
        
        # Fully connected layers with dropout
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        
        # Output layer
        x = self.fc3(x)  # No activation, as this is for logits
        
        return x

After running the cell above, the network can be tested with the following code:

x = torch.randn(1, 1, 30, 25, 25)  # simulated input
net = HybridSN(class_num=16)
y = net(x)
print(y.shape)  # output should be (1, 16)
  • Figure: the printed y.shape, showing that the network outputs a tensor of shape (1, 16).


Data Processing and Train/Test Split

4. PCA dimensionality reduction and data preprocessing
# Apply a PCA transform to the hyperspectral data X
def applyPCA(X, numComponents):
    newX = np.reshape(X, (-1, X.shape[2]))
    pca = PCA(n_components=numComponents, whiten=True)
    newX = pca.fit_transform(newX)
    newX = np.reshape(newX, (X.shape[0], X.shape[1], numComponents))
    return newX

# When a patch is extracted around each pixel, edge pixels cannot form a full patch, so the data is zero-padded first
def padWithZeros(X, margin=2):
    newX = np.zeros((X.shape[0] + 2 * margin, X.shape[1] + 2 * margin, X.shape[2]))
    x_offset = margin
    y_offset = margin
    newX[x_offset:X.shape[0] + x_offset, y_offset:X.shape[1] + y_offset, :] = X
    return newX

# Extract a patch around each pixel, then arrange the patches into a Keras-style (N, H, W, C) format
def createImageCubes(X, y, windowSize=5, removeZeroLabels=True):
    # Pad X so that edge pixels also get full patches
    margin = int((windowSize - 1) / 2)
    zeroPaddedX = padWithZeros(X, margin=margin)
    # One patch (and one label) per pixel
    patchesData = np.zeros((X.shape[0] * X.shape[1], windowSize, windowSize, X.shape[2]))
    patchesLabels = np.zeros((X.shape[0] * X.shape[1]))
    patchIndex = 0
    for r in range(margin, zeroPaddedX.shape[0] - margin):
        for c in range(margin, zeroPaddedX.shape[1] - margin):
            patch = zeroPaddedX[r - margin:r + margin + 1, c - margin:c + margin + 1]
            patchesData[patchIndex, :, :, :] = patch
            patchesLabels[patchIndex] = y[r - margin, c - margin]
            patchIndex = patchIndex + 1
    # Drop unlabeled (background) pixels and shift labels to start from 0
    if removeZeroLabels:
        patchesData = patchesData[patchesLabels > 0, :, :, :]
        patchesLabels = patchesLabels[patchesLabels > 0]
        patchesLabels -= 1
    return patchesData, patchesLabels
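With these helpers, the data can be reduced with PCA, cut into patch cubes, and split into training and test sets. Below is a minimal sketch using the imports from step 2 and the helpers above; the values pca_components=30, patch_size=25, test_ratio=0.90, and the random seed are assumptions rather than values taken from the original lab:

# Assumed hyperparameters for this sketch
pca_components = 30   # number of spectral components kept by PCA
patch_size = 25       # spatial window around each pixel
test_ratio = 0.90     # fraction of labeled pixels reserved for testing

X = sio.loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']
y = sio.loadmat('Indian_pines_gt.mat')['indian_pines_gt']

X_pca = applyPCA(X, numComponents=pca_components)
X_cubes, y_cubes = createImageCubes(X_pca, y, windowSize=patch_size)

# Stratified split over the labeled pixels only
Xtrain, Xtest, ytrain, ytest = train_test_split(
    X_cubes, y_cubes, test_size=test_ratio, random_state=345, stratify=y_cubes)

# Rearrange (N, H, W, bands) into (N, 1, bands, H, W) for the 3D convolutions of HybridSN
Xtrain = Xtrain.transpose(0, 3, 1, 2)[:, np.newaxis, :, :, :]
Xtest = Xtest.transpose(0, 3, 1, 2)[:, np.newaxis, :, :, :]
print(Xtrain.shape, Xtest.shape)  # (N_train, 1, 30, 25, 25), (N_test, 1, 30, 25, 25)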
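To go with the "Network Architecture and Training" section above, here is a minimal training-loop sketch. It assumes the Xtrain/ytrain arrays produced by the split sketch; the batch size, optimizer (Adam with lr=0.001), and epoch count are assumptions rather than the lab's exact settings:

from torch.utils.data import TensorDataset, DataLoader

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Wrap the training cubes and labels in a DataLoader
train_ds = TensorDataset(torch.tensor(Xtrain, dtype=torch.float32),
                         torch.tensor(ytrain, dtype=torch.long))
train_loader = DataLoader(train_ds, batch_size=128, shuffle=True)

net = HybridSN(class_num=16).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

net.train()
for epoch in range(100):
    total_loss = 0.0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print('[Epoch %2d] average loss: %.4f' % (epoch + 1, total_loss / len(train_loader)))

After training, switching to net.eval() and predicting on Xtest allows accuracy_score, classification_report, and cohen_kappa_score from the step 2 imports to be used for evaluation.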