PyTorch Hyperspectral Image Classification

PyTorch code, with no comments; you will need to read it to understand it.

mirrors/eecn/Hyperspectral-Classification · GitCode: https://gitcode.net/mirrors/eecn/Hyperspectral-Classification

README:

DeepHyperX

A Python tool to perform deep learning experiments on various hyperspectral datasets. 

Reference 

This toolbox was used for our review paper in Geoscience and Remote Sensing Magazine:

N. Audebert, B. Le Saux and S. Lefevre, 
"Deep Learning for Classification of Hyperspectral Data: A Comparative Review," 
in IEEE Geoscience and Remote Sensing Magazine, vol. 7, no. 2, pp. 159-173, June 2019.


BibTeX format:

```
@article{8738045,
  author={N. {Audebert} and B. {Le Saux} and S. {Lefèvre}},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  title={Deep Learning for Classification of Hyperspectral Data: A Comparative Review},
  year={2019},
  volume={7},
  number={2},
  pages={159-173},
  doi={10.1109/MGRS.2019.2912563},
  ISSN={2373-7468},
  month={June},
}
```

Note:

The original code was forked from the GitLab project: https://gitlab.inria.fr/naudeber/DeepHyperX

There is also a repository on GitHub, which may be the official project code: https://github.com/nshaud/DeepHyperX

This repository will not be updated in the future. If you need continued development, please go to the DeepHyperX project: https://github.com/nshaud/DeepHyperX

Requirements

This tool is compatible with Python 2.7 and Python 3.5+.

It is based on the PyTorch deep learning and GPU computing framework and uses the Visdom visualization server.

Setup 

The easiest way to install this code is to create a Python virtual environment and install the dependencies with `pip install -r requirements.txt`.

Hyperspectral datasets

Several public hyperspectral datasets are available on the UPV/EHU wiki. Users can download them beforehand or let the tool download them. The default dataset folder is `./Datasets/`, although this can be changed at runtime using the `--folder` argument.

 

At this time, the tool automatically downloads the following public datasets:

  • Pavia University
  • Pavia Center
  • Kennedy Space Center
  • Indian Pines
  • Botswana

The original Data Fusion Contest 2018 hyperspectral dataset (DFC2018_HSI) can no longer be obtained; you can try the newer IEEE GRSS data or email me to get the original data (for research and non-commercial purposes only; do not redistribute it).

The Hyperspectral Image Analysis Lab (HSIAL) report includes the following statement:

If you wish to use the data, please be sure to email us and provide your name, contact information, affiliation (university, research lab, etc.), and an acknowledgement that you will cite this dataset and its source appropriately, as well as provide an acknowledgement to the IEEE GRSS IADF and the Hyperspectral Image Analysis Lab at the University of Houston, in any manuscript(s) resulting from it.

An example dataset folder has the following structure:

```
Datasets
├── Botswana
│   ├── Botswana_gt.mat
│   └── Botswana.mat
├── DFC2018_HSI
│   ├── 2018_IEEE_GRSS_DFC_GT_TR.tif
│   ├── 2018_IEEE_GRSS_DFC_HSI_TR
│   └── 2018_IEEE_GRSS_DFC_HSI_TR.HDR
├── IndianPines
│   ├── Indian_pines_corrected.mat
│   └── Indian_pines_gt.mat
├── KSC
│   ├── KSC_gt.mat
│   └── KSC.mat
├── PaviaC
│   ├── Pavia_gt.mat
│   └── Pavia.mat
└── PaviaU
    ├── PaviaU_gt.mat
    └── PaviaU.mat
```

Adding a new dataset

Adding a custom dataset can be done by modifying the `custom_datasets.py` file. Developers should add a new entry to the `CUSTOM_DATASETS_CONFIG` variable and define a specific data loader for their use case.
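As an illustration, a hypothetical entry could look like the sketch below. The dataset name `MyDataset`, the file names, and the exact return tuple of the loader are assumptions modelled on the loaders shipped with the toolbox; check `custom_datasets.py` in your copy for the exact interface expected by your version:

```python
from utils import open_file  # file-reading helper provided by the toolbox

def my_dataset_loader(folder):
    # hypothetical loader: read the image cube and the ground-truth map
    img = open_file(folder + "my_dataset.mat")
    gt = open_file(folder + "my_dataset_gt.mat").astype("uint8")
    rgb_bands = (47, 31, 15)  # bands used for the RGB visualization
    label_values = ["Unclassified", "Class 1", "Class 2"]
    ignored_labels = [0]      # label 0 marks unlabelled pixels
    palette = None            # let the tool pick a color palette
    return img, gt, rgb_bands, ignored_labels, label_values, palette

CUSTOM_DATASETS_CONFIG = {
    # hypothetical entry; the file names are placeholders
    "MyDataset": {
        "img": "my_dataset.mat",
        "gt": "my_dataset_gt.mat",
        "download": False,  # the tool will not try to fetch these files
        "loader": lambda folder: my_dataset_loader(folder),
    }
}
```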

Models 

Currently, this tool implements several SVM variants from the scikit-learn library and many state-of-the-art deep networks implemented in PyTorch. 

Adding a new model

Adding a custom deep network can be done by modifying the `models.py` file. This implies creating a new class for the custom deep network and altering the `get_model` function.
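For illustration, registering a new network might look roughly like the sketch below. The class `MyNet`, the branch shown in the comment, and the keyword names `n_bands` and `n_classes` are assumptions; the real `get_model` in `models.py` also sets optimizers, losses, and other hyperparameters:

```python
import torch
import torch.nn as nn

class MyNet(nn.Module):
    # hypothetical spectral classifier with one hidden layer
    def __init__(self, input_channels, n_classes):
        super(MyNet, self).__init__()
        self.fc1 = nn.Linear(input_channels, 128)
        self.fc2 = nn.Linear(128, n_classes)

    def forward(self, x):
        x = x.view(x.size(0), -1)    # flatten the spectral vector
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

# Inside get_model(name, **kwargs), add a branch such as:
#
#     elif name == "mynet":
#         model = MyNet(kwargs["n_bands"], kwargs["n_classes"])
```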

Usage 

Start a Visdom server with `python -m visdom.server` and go to http://localhost:8097 to see the visualizations (or http://localhost:9999 if you use Docker).

Then, run the script `main.py`.

The most useful arguments are: 

  • `--model` to specify the model (e.g. 'svm', 'nn', 'hamida', 'lee', 'chen', 'li'),
  • `--dataset` to specify which dataset to use (e.g. 'PaviaC', 'PaviaU', 'IndianPines', 'KSC', 'Botswana'),
  • the `--cuda` switch to run the neural networks on the GPU. The tool falls back to the CPU if this switch is not specified.

 

There are more parameters that can be used to control the behaviour of the tool more finely. See `python main.py -h` for more information.

Examples: 

 

  • `python main.py --model SVM --dataset IndianPines --training_sample 0.3` runs a grid search with an SVM on the Indian Pines dataset, using 30% of the samples for training and the rest for testing. Results are displayed in the Visdom panel.
  • `python main.py --model nn --dataset PaviaU --training_sample 0.1 --cuda 0` runs a basic 4-layer fully connected neural network on the GPU on the Pavia University dataset, using 10% of the samples for training.
  • `python main.py --model hamida --dataset PaviaU --training_sample 0.5 --patch_size 7 --epoch 50 --cuda 0` runs the 3D CNN from Hamida et al. on the GPU on the Pavia University dataset with a patch size of 7, using 50% of the samples for training and optimizing for 50 epochs.

License information 

Code for the DeepHyperX toolbox is dual-licensed depending on the application: research or commercial.

In PyTorch, you can use `torch.utils.data.Dataset` and `torch.utils.data.DataLoader` to load hyperspectral image data and extract patches from it.

Suppose your hyperspectral dataset is stored in `.npy` files containing all of the image data. First, define a custom `HyperspectralDataset` class that inherits from `torch.utils.data.Dataset` and handles loading and preprocessing. In this class, implement the `__getitem__` method to fetch each sample's data and label and convert them to tensors. A concrete implementation could look like this:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class HyperspectralDataset(Dataset):
    def __init__(self, data_path, label_path, block_size):
        # load the full data cube and the labels from .npy files
        self.data = torch.from_numpy(np.load(data_path)).float()
        self.labels = torch.from_numpy(np.load(label_path)).long()
        self.block_size = block_size

    def __getitem__(self, index):
        x = self.data[index]
        y = self.labels[index]
        # randomly sample a block from the hyperspectral image
        x_block = self.random_crop(x, self.block_size)
        return x_block, y

    def __len__(self):
        return len(self.data)

    def random_crop(self, x, block_size):
        # pick a random top-left corner and cut out a square block
        _, h, w = x.size()
        dh, dw = block_size, block_size
        h1 = np.random.randint(0, h - dh + 1)
        w1 = np.random.randint(0, w - dw + 1)
        return x[:, h1:h1 + dh, w1:w1 + dw]
```

In the code above, `data_path` and `label_path` are the paths to the `.npy` files holding the image data and the labels, and `block_size` is the size of the extracted patch. In `__getitem__`, a block is randomly sampled and returned.

Next, use the `torch.utils.data.DataLoader` class to create a data loader that serves the dataset in batches. For example:

```python
from torch.utils.data import DataLoader

hyperspectral_dataset = HyperspectralDataset(data_path, label_path, block_size)
hyperspectral_dataloader = DataLoader(dataset=hyperspectral_dataset,
                                      batch_size=batch_size,
                                      shuffle=True)
```

Here, `batch_size` is the number of samples per batch, and `shuffle=True` randomizes the order of the samples in each epoch.

Finally, iterate over the data loader with a `for` loop to get the data and labels of each batch:

```python
for x_batch, y_batch in hyperspectral_dataloader:
    # do something with x_batch and y_batch
    pass
```

In this loop, `x_batch` has shape `(batch_size, num_channels, block_size, block_size)` and `y_batch` has shape `(batch_size,)`. You can process `x_batch` further, for example by feeding it to a model for training or inference.
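To make that last step concrete, here is a minimal training-step sketch. The tiny CNN, the loss, the optimizer, and the values of `num_channels` and `num_classes` below are placeholder assumptions for illustration, not part of any toolbox:

```python
import torch
import torch.nn as nn

num_channels, num_classes = 103, 9  # assumed values, e.g. Pavia University

# placeholder CNN; swap in your own architecture
model = nn.Sequential(
    nn.Conv2d(num_channels, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, num_classes),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x_batch, y_batch in hyperspectral_dataloader:
    optimizer.zero_grad()
    logits = model(x_batch)            # (batch_size, num_classes)
    loss = criterion(logits, y_batch)  # cross-entropy against integer labels
    loss.backward()
    optimizer.step()
```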