Notes on pitfalls when running Hyperspectral-Classification-master

Method 1

1. Modify the dataset and model near lines 47 and 49 of main.py. In these two lines, set default to the specific dataset and model you want, e.g.:

       parser.add_argument('--dataset', type=str, default='PaviaU', choices=dataset_names,
                           help="Dataset to use.")
       parser.add_argument('--model', type=str, default='sharma')

   Line 305 sets whether the code runs on the GPU or the CPU.
2. In viz = visdom.Visdom(env=str(DATASET) + ' ' + str(MODEL)), DATASET and MODEL can be converted to strings with str(), or left unchanged.
3. To run the code on the GPU, add the line input = torch.FloatTensor(input).cuda() before summary(model.to(hyperparams['device']), input.size()[1:]), change the summary call to summary(model.to('cuda'), input.size()[1:]), and set the cuda argument to 0 (see the sketch after this list).
4. When creating a new environment, python=3.9 generally works.
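A minimal sketch of the modification in item 3, using a stand-in model and input (torchsummary is assumed to be installed and a CUDA device available; in the repo's main.py the same two lines wrap the real network and dummy batch):

    import torch
    import torch.nn as nn
    from torchsummary import summary  # assumes the torchsummary package is installed

    model = nn.Sequential(nn.Flatten(), nn.Linear(103, 9))  # stand-in for the real network
    input = torch.zeros(1, 103)                             # stand-in for the dummy input batch

    # Per item 3: convert the dummy input to a CUDA float tensor
    # and move the model to the GPU before printing the summary.
    input = torch.FloatTensor(input).cuda()
    summary(model.to('cuda'), input.size()[1:])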

Method 2

No changes to the code are needed; only the command used to run it changes:
Then, run the script main.py.

The required arguments are the dataset and the model, e.g. --model SVM --dataset IndianPines:

  • --model to specify the model (e.g. 'svm', 'nn', 'hamida', 'lee', 'chen', 'li'),
  • --dataset to specify which dataset to use (e.g. 'PaviaC', 'PaviaU', 'IndianPines', 'KSC', 'Botswana'),
  • the --cuda switch to run the neural nets on GPU. The tool falls back on CPU if this switch is not specified.

There are more parameters that can be used to control the behaviour of the tool more finely. See python main.py -h for more information.

Examples:

  • python main.py --model SVM --dataset IndianPines --training_sample 0.3
    This runs a grid search on SVM on the Indian Pines dataset, using 30% of the samples for training and the rest for testing. Results are displayed in the visdom panel.

  • python main.py --model nn --dataset PaviaU --training_sample 0.1 --cuda 0
    This runs on GPU a basic 4-layers fully connected neural network on the Pavia University dataset, using 10% of the samples for training.

  • python main.py --model hamida --dataset PaviaU --training_sample 0.5 --patch_size 7 --epoch 50 --cuda 0
    This runs on GPU the 3D CNN from Hamida et al. on the Pavia University dataset with a patch size of 7, using 50% of the samples for training and optimizing for 50 epochs.
All of the parameters below can either have their default changed in the code or be passed on the command line at runtime:

    parser.add_argument(
        "--dataset", type=str, default=None, choices=dataset_names, help="Dataset to use."
    )
    parser.add_argument(
        "--model",
        type=str,
        default=None,
        help="Model to train. Available:\n"
        "SVM (linear), "
        "SVM_grid (grid search on linear, poly and RBF kernels), "
        "baseline (fully connected NN), "
        "hu (1D CNN), "
        "hamida (3D CNN + 1D classifier), "
        "lee (3D FCN), "
        "chen (3D CNN), "
        "li (3D CNN), "
        "he (3D CNN), "
        "luo (3D CNN), "
        "sharma (2D CNN), "
        "boulch (1D semi-supervised CNN), "
        "liu (3D semi-supervised CNN), "
        "mou (1D RNN)",
    )
    parser.add_argument(
        "--folder",
        type=str,
        help="Folder where to store the "
        "datasets (defaults to the current working directory).",
        default="./Datasets/",
    )
    parser.add_argument(
        "--cuda",
        type=int,
        default=-1,
        help="Specify CUDA device (defaults to -1, which learns on CPU)",
    )
    parser.add_argument("--runs", type=int, default=1, help="Number of runs (default: 1)")
    parser.add_argument(
        "--restore",
        type=str,
        default=None,
        help="Weights to use for initialization, e.g. a checkpoint",
    )
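For instance, to average several runs and warm-start from saved weights (a hypothetical invocation; checkpoint.pth stands in for a checkpoint file produced by a previous run):

    python main.py --model hamida --dataset PaviaU --cuda 0 --runs 3 --restore checkpoint.pth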

Dataset options

    group_dataset = parser.add_argument_group("Dataset")
    group_dataset.add_argument(
        "--training_sample",
        type=float,
        default=10,
        help="Percentage of samples to use for training (default: 10%%)",
    )
    group_dataset.add_argument(
        "--sampling_mode",
        type=str,
        help="Sampling mode (random sampling or disjoint, default: random)",
        default="random",
    )
    group_dataset.add_argument(
        "--train_set",
        type=str,
        default=None,
        help="Path to the train ground truth (optional, this "
        "supersedes the --sampling_mode option)",
    )
    group_dataset.add_argument(
        "--test_set",
        type=str,
        default=None,
        help="Path to the test set (optional, by default "
        "the test_set is the entire ground truth minus the training)",
    )
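For example, to use a spatially disjoint train/test split instead of random pixel sampling (an illustrative command built from the options above):

    python main.py --model SVM --dataset IndianPines --training_sample 0.3 --sampling_mode disjoint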

Training options

    group_train = parser.add_argument_group("Training")
    group_train.add_argument(
        "--epoch",
        type=int,
        help="Training epochs (optional, if absent will be set by the model)",
    )
    group_train.add_argument(
        "--patch_size",
        type=int,
        help="Size of the spatial neighbourhood (optional, if "
        "absent will be set by the model)",
    )
    group_train.add_argument(
        "--lr", type=float, help="Learning rate, set by the model if not specified."
    )
    group_train.add_argument(
        "--class_balancing",
        action="store_true",
        help="Inverse median frequency class balancing (default = False)",
    )
    group_train.add_argument(
        "--batch_size",
        type=int,
        help="Batch size (optional, if absent will be set by the model)",
    )
    group_train.add_argument(
        "--test_stride",
        type=int,
        default=1,
        help="Sliding window step stride during inference (default = 1)",
    )
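For example, to override the model defaults for these training options (an illustrative command; the values are arbitrary):

    python main.py --model hamida --dataset PaviaU --cuda 0 --epoch 100 --lr 0.01 --batch_size 100 --class_balancing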

Data augmentation parameters

    group_da = parser.add_argument_group("Data augmentation")
    group_da.add_argument(
        "--flip_augmentation", action="store_true", help="Random flips (if patch_size > 1)"
    )
    group_da.add_argument(
        "--radiation_augmentation",
        action="store_true",
        help="Random radiation noise (illumination)",
    )
    group_da.add_argument(
        "--mixture_augmentation", action="store_true", help="Random mixes between spectra"
    )

    parser.add_argument(
        "--with_exploration", action="store_true", help="See data exploration visualization"
    )
    parser.add_argument(
        "--download",
        type=str,
        default=None,
        nargs="+",
        choices=dataset_names,
        help="Download the specified datasets and quit.",
    )
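For reference, a minimal sketch of how such arguments are typically turned into a device and a hyperparameter dict (names here are illustrative; the repo's main.py differs in detail):

    import argparse

    import torch

    parser = argparse.ArgumentParser(description="Hyperspectral classification")
    parser.add_argument("--cuda", type=int, default=-1,
                        help="Specify CUDA device (defaults to -1, which learns on CPU)")
    args = parser.parse_args()

    # -1 keeps everything on the CPU; any other value selects that CUDA device.
    device = torch.device("cpu" if args.cuda < 0 else "cuda:{}".format(args.cuda))

    hyperparams = vars(args)       # argparse namespace -> plain dict
    hyperparams["device"] = device
    print(hyperparams)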
