# gan-manifold-reg-master: Running the Manifold-Regularized GAN Code

Summary:

1. When you hit an error while debugging, run the code in debugger mode and step through it line by line.
2. Learn to read error messages; the problem is not necessarily on the last line the traceback points to.
3. Many errors are caused by version mismatches, so read the message carefully. If a module is missing, a plain `pip install` usually fixes it; for permission problems, append `--user` or prefix the command with `sudo` (see the example after this list).
4. Try `print`-ing intermediate results to locate the problem.
5. Lower the epoch count when you only want a quick look at the results; otherwise the run takes far too long.
6. Add comments as you go, for future reference.
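For example, for a missing module (a generic illustration; `some_module` is a placeholder, not a real dependency of this repo):

```
pip install some_module --user   # per-user install when you lack write permissions
sudo pip install some_module     # or install system-wide with sudo
```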


**Manifold regularization with GANs for semi-supervised learning**

Bruno Lecouat, Chuan-Sheng Foo, Houssam Zenati, Vijay Chandrasekhar
Institute for Infocomm Research, A*STAR
bruno_lecouat@i2r.a-star.edu.sg, foocs@i2r.a-star.edu.sg, houssam_zenati@i2r.a-star.edu.sg, vijay@i2r.a-star.edu.sg

**Abstract**
Generative Adversarial Networks are powerful generative models that are able to
model the manifold of natural images. We leverage this property to perform manifold
regularization by approximating a variant of the Laplacian norm using a Monte
Carlo approximation that is easily computed with the GAN. When incorporated
into the semi-supervised feature-matching GAN we achieve state-of-the-art results
for GAN-based semi-supervised learning on CIFAR-10 and SVHN benchmarks,
with a method that is significantly easier to implement than competing methods.
We also find that manifold regularization improves the quality of generated images,
and is affected by the quality of the GAN used to approximate the regularizer.
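As a reading aid, here is my own minimal sketch of the regularizer the abstract describes, not the repo's actual implementation (`generator` and `classifier` are hypothetical callables): sample a latent code, nudge it by a small random perturbation, and penalize how much the classifier's output changes between the two nearby points on the generated manifold.

```python
import tensorflow as tf

def manifold_reg(generator, classifier, z, eps=20.0):
    """Monte Carlo sketch of the manifold smoothness penalty (TF 1.x).

    generator, classifier: hypothetical callables, not the repo's API.
    eps: perturbation scale (cf. the epsilon=20.0 flag further below).
    """
    # Random direction in latent space, rescaled to length eps.
    delta = tf.random_normal(tf.shape(z))
    delta = eps * tf.nn.l2_normalize(delta, axis=1)

    # Classifier outputs at two nearby points on the generated manifold.
    out = classifier(generator(z))
    out_perturbed = classifier(generator(z + delta))

    # Penalizing the difference approximates a (variant of the) Laplacian
    # norm: it discourages sharp classifier changes along the manifold.
    return tf.reduce_mean(tf.reduce_sum(tf.square(out - out_perturbed), axis=1))
```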

# Manifold regularization with GANs for semi-supervised learning

This is the code we used in our paper
>Manifold regularization with GANs for semi-supervised learning
>Bruno Lecouat*, Chuan Sheng Foo*, Houssam Zenati, Vijay Ramaseshan Chandrasekhar

## Requirements

This repo supports Python 3.5 + TensorFlow 1.5.

## Run the Code

To reproduce our results on SVHN:
```
python train_svhn.py
```

In my run I had to change `data_dir` to `./data/svhn` and pre-download the files that `svhn_data.py` would otherwise fetch. These are its two download calls:

```python
filepath, _ = urllib.request.urlretrieve('http://ufldl.stanford.edu/housenumbers/train_32x32.mat', new_data_dir+'/train_32x32.mat', _progress)
filepath, _ = urllib.request.urlretrieve('http://ufldl.stanford.edu/housenumbers/test_32x32.mat', new_data_dir+'/test_32x32.mat', _progress)
```

Put `train_32x32.mat` and `test_32x32.mat` into `data_dir + '/svhn'` and the script runs.
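To pre-fetch the files by hand, a standalone sketch (the target directory follows the note above; adjust it if your `svhn_data.py` builds the path differently):

```python
import os
import urllib.request

data_dir = './data/svhn'  # per the note above
os.makedirs(data_dir, exist_ok=True)

for name in ('train_32x32.mat', 'test_32x32.mat'):
    target = os.path.join(data_dir, name)
    if not os.path.exists(target):
        urllib.request.urlretrieve(
            'http://ufldl.stanford.edu/housenumbers/' + name, target)
```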

To reproduce our results on CIFAR-10
```
python train_cifar.py
```


File "/home/gis/PycharmProjects/guo/gan-manifold-reg-master/train_cifar.py", line 258, in main
    init_op=op,init_feed_dict=init_feed_dict,max_to_keep=2000)
  File "/home/gis/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 250, in new_func
    return func(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'max_to_keep'
 后来把这一行中max_to_keep=2000去掉,变为:

sv = tf.train.Supervisor(logdir=FLAGS.logdir, global_step=global_epoch, summary_op=None, save_model_secs=0,init_op=op,init_feed_dict=init_feed_dict)  
   然后提示 268行错误 #sv.saver(max_to_keep=2000)  不需要参数什么的 。

改成下面语句:sv.saver.save(sess,FLAGS.logdir, global_step=global_epoch)

重新顺利运行。可能是tensorflow版本问题,高版本没有max_to_keep参数了,saver调用也不一样了。
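If you still want to keep 2000 checkpoints around, a cleaner workaround (my own suggestion, not from the repo) should be to construct the `Saver` yourself, since `tf.train.Saver` still accepts `max_to_keep`, and hand it to the `Supervisor` through its `saver` argument:

```python
# Sketch, assuming TF 1.x: build the Saver explicitly so max_to_keep survives.
saver = tf.train.Saver(max_to_keep=2000)
sv = tf.train.Supervisor(logdir=FLAGS.logdir, global_step=global_epoch,
                         summary_op=None, save_model_secs=0,
                         init_op=op, init_feed_dict=init_feed_dict,
                         saver=saver)
```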


Parameters:

```
gpu=0
batch_size=25
data_dir=./data/cifar-10-python/
logdir=./log/cifar
seed=10
labeled=400
learning_rate=0.0003
unl_weight=1.0
lbl_weight=1.0
ma_decay=0.9999
decay_start=1200
epoch=1400
validation=False
clamp=False
abs=False
lmin=1.0
lmax=1.0
nabla=1
gamma=0.001
epsilon=20.0
eta=1.0
freq_print=10000
step_print=50
freq_test=1
freq_save=10
h=False
help=False
helpfull=False
helpshort=False
```
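
Since these are command-line flags, a shorter sanity-check run (see tip 5 in the summary) can presumably be launched by overriding them on the command line, assuming the script parses flags the usual TensorFlow way:

```
python train_cifar.py --epoch=100 --data_dir=./data/cifar-10-python/
```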

```
Loading pickle file: ./data/cifar-10-python/cifar-10-batches-py/data_batch_1
Loading pickle file: ./data/cifar-10-python/cifar-10-batches-py/data_batch_2
Loading pickle file: ./data/cifar-10-python/cifar-10-batches-py/data_batch_3
Loading pickle file: ./data/cifar-10-python/cifar-10-batches-py/data_batch_4
Loading pickle file: ./data/cifar-10-python/cifar-10-batches-py/data_batch_5
Loading pickle file: ./data/cifar-10-python/cifar-10-batches-py/test_batch
seed trainy: [5 9 8 ... 7 5 5]
Data:
train examples 50000, batch 2000, test examples 10000, batch 400
histogram train [5000 5000 5000 5000 5000 5000 5000 5000 5000 5000]
histogram test  [1000 1000 1000 1000 1000 1000 1000 1000 1000 1000]
histogram labeled [400 400 400 400 400 400 400 400 400 400]

WARNING:tensorflow:From /home/gis/PycharmProjects/guo/gan-manifold-reg-master/train_cifar.py:130: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
```
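
This warning is harmless for the run, but silencing it should be a one-word change at line 130 of train_cifar.py: replace the deprecated `dim` keyword with `axis`. A minimal sketch (the tensor and its shape are illustrative, not the repo's actual code):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128])  # illustrative tensor
# Deprecated form flagged above: tf.nn.l2_normalize(x, dim=1)
normalized = tf.nn.l2_normalize(x, axis=1)   # `axis` replaces `dim`
```

With that change the warning should disappear. The rest of the console output: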
```
manifold reg enabled
WARNING:tensorflow:From /home/gis/PycharmProjects/guo/gan-manifold-reg-master/train_cifar.py:257: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.MonitoredTrainingSession
start training
2018-11-26 20:21:21.435681: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-11-26 20:21:21.514700: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-11-26 20:21:21.514954: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties: 
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-11-26 20:21:21.514964: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2018-11-26 20:21:21.671343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10414 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)

initialization done
Starting training from epoch :0, step:0 


Epoch 0 | time = 142s | loss gen = 0.3232 | loss lab = 0.4920 | loss unl = 1.7724 | train acc = 0.3468| test acc = 0.4415 | test acc ema = 0.1545
Model saved in file: ./log/cifar/model-0
Epoch 1 | time = 140s | loss gen = 0.3511 | loss lab = 0.4071 | loss unl = 1.1725 | train acc = 0.5810| test acc = 0.4824 | test acc ema = 0.2085
Epoch 2 | time = 140s | loss gen = 0.4669 | loss lab = 0.3669 | loss unl = 0.7721 | train acc = 0.7239| test acc = 0.5422 | test acc ema = 0.2689
Epoch 3 | time = 140s | loss gen = 0.5737 | loss lab = 0.3524 | loss unl = 0.5011 | train acc = 0.8231| test acc = 0.5737 | test acc ema = 0.3767
Epoch 4 | time = 140s | loss gen = 0.6615 | loss lab = 0.3254 | loss unl = 0.3324 | train acc = 0.8842| test acc = 0.5497 | test acc ema = 0.4714
Epoch 5 | time = 140s | loss gen = 0.6970 | loss lab = 0.3160 | loss unl = 0.2437 | train acc = 0.9165| test acc = 0.5509 | test acc ema = 0.5368
Epoch 6 | time = 140s | loss gen = 0.6970 | loss lab = 0.3120 | loss unl = 0.1942 | train acc = 0.9359| test acc = 0.5780 | test acc ema = 0.5769
Epoch 7 | time = 140s | loss gen = 0.7019 | loss lab = 0.3048 | loss unl = 0.1608 | train acc = 0.9484| test acc = 0.5782 | test acc ema = 0.6012
Epoch 8 | time = 140s | loss gen = 0.6961 | loss lab = 0.3001 | loss unl = 0.1384 | train acc = 0.9568| test acc = 0.5663 | test acc ema = 0.6141
Epoch 9 | time = 140s | loss gen = 0.6861 | loss lab = 0.3022 | loss unl = 0.1244 | train acc = 0.9618| test acc = 0.5661 | test acc ema = 0.6152
Epoch 10 | time = 140s | loss gen = 0.6675 | loss lab = 0.3054 | loss unl = 0.1116 | train acc = 0.9672| test acc = 0.5816 | test acc ema = 0.6174
Model saved in file: ./log/cifar/model-10
Epoch 11 | time = 140s | loss gen = 0.6569 | loss lab = 0.3042 | loss unl = 0.0996 | train acc = 0.9720| test acc = 0.5544 | test acc ema = 0.6174
Epoch 12 | time = 140s | loss gen = 0.6452 | loss lab = 0.3058 | loss unl = 0.0930 | train acc = 0.9739| test acc = 0.5887 | test acc ema = 0.6183
Epoch 13 | time = 141s | loss gen = 0.6358 | loss lab = 0.3005 | loss unl = 0.0856 | train acc = 0.9769| test acc = 0.5868 | test acc ema = 0.6221
Epoch 14 | time = 140s | loss gen = 0.6271 | loss lab = 0.2959 | loss unl = 0.0792 | train acc = 0.9791| test acc = 0.6065 | test acc ema = 0.6227
Epoch 15 | time = 140s | loss gen = 0.6050 | loss lab = 0.3015 | loss unl = 0.0745 | train acc = 0.9800| test acc = 0.6078 | test acc ema = 0.6240
Epoch 16 | time = 140s | loss gen = 0.6004 | loss lab = 0.2958 | loss unl = 0.0699 | train acc = 0.9822| test acc = 0.6120 | test acc ema = 0.6248
Epoch 17 | time = 140s | loss gen = 0.5841 | loss lab = 0.2962 | loss unl = 0.0656 | train acc = 0.9836| test acc = 0.6163 | test acc ema = 0.6259
Epoch 18 | time = 140s | loss gen = 0.5772 | loss lab = 0.2951 | loss unl = 0.0629 | train acc = 0.9841| test acc = 0.6096 | test acc ema = 0.6283
Epoch 19 | time = 140s | loss gen = 0.5690 | loss lab = 0.2972 | loss unl = 0.0590 | train acc = 0.9858| test acc = 0.5983 | test acc ema = 0.6323
Epoch 20 | time = 141s | loss gen = 0.5594 | loss lab = 0.2939 | loss unl = 0.0566 | train acc = 0.9869| test acc = 0.6134 | test acc ema = 0.6341
Model saved in file: ./log/cifar/model-20
Epoch 21 | time = 140s | loss gen = 0.5494 | loss lab = 0.2964 | loss unl = 0.0551 | train acc = 0.9873| test acc = 0.6304 | test acc ema = 0.6390
Epoch 22 | time = 140s | loss gen = 0.5432 | loss lab = 0.2916 | loss unl = 0.0506 | train acc = 0.9888| test acc = 0.6328 | test acc ema = 0.6408
Epoch 23 | time = 141s | loss gen = 0.5336 | loss lab = 0.2903 | loss unl = 0.0505 | train acc = 0.9882| test acc = 0.6477 | test acc ema = 0.6440
Epoch 24 | time = 140s | loss gen = 0.5293 | loss lab = 0.2878 | loss unl = 0.0477 | train acc = 0.9897| test acc = 0.6320 | test acc ema = 0.6481
Epoch 25 | time = 141s | loss gen = 0.5247 | loss lab = 0.2852 | loss unl = 0.0446 | train acc = 0.9906| test acc = 0.6187 | test acc ema = 0.6489
```

## Results

Here is a comparison of different models, all using standard architectures, on SVHN and CIFAR-10:

CIFAR-10 (% error) | 1000 labels | 4000 labels
-- | -- | --
Pi model |5.43 +/- 0.25| 16.55 +/- 0.29
Mean Teacher |21.55 +/- 1.48 | 12.31 +/- 0.28
VAT large | | 14.18
FM  | 21.83 +/- 2.01 | 18.63 +/- 2.32
ALI | 19.98 +/- 0.89 | 17.99 +/- 1.62
Bad GAN  |  | 14.41 +/- 0.30
Ours | **16.37 +/- 0.42**| **14.34 +/- 0.17**

SVHN (% error) | 500 labels | 1000 labels
-- | -- | --
Pi model |7.05 +/- 0.30| 5.43 +/- 0.25
Mean Teacher |4.35 +/- 0.50 | 3.95 +/- 0.19
VAT small |  | 5.77
FM  | 18.44 +/- 4.80 | 8.11 +/- 1.30
ALI |  | 7.41 +/- 0.65
Bad GAN  | | 7.42 +/- 0.65
Ours | **5.67 +/- 0.11**| **4.63 +/- 0.11**

In `cifar10_input.py`: does http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz need to be downloaded in advance? It extracts to `path/cifar-10-batches-py`.
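Judging from the loading log above, the extracted `cifar-10-batches-py` folder is read from under `data_dir`, so pre-downloading and extracting it there works. A standalone sketch (paths taken from the parameter dump above):

```python
import os
import tarfile
import urllib.request

data_dir = './data/cifar-10-python'  # matches the data_dir flag above
url = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
archive = os.path.join(data_dir, 'cifar-10-python.tar.gz')

os.makedirs(data_dir, exist_ok=True)
if not os.path.isdir(os.path.join(data_dir, 'cifar-10-batches-py')):
    urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(data_dir)  # creates data_dir/cifar-10-batches-py/
```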

 
