G:\python\python.exe "F:/my_code/spikingjelly-master-20201221/spikingjelly/clock_driven/ann2snn/examples/if_cnn_mnist_work - 副本.py"
All the temp files are saved to ./log-cnn_mnist1622383630.8332608
ann2snn config:
{
    'simulation': {
        'reset_to_zero': False,
        'encoder': {'possion': False},
        'avg_pool': {'has_neuron': True},
        'max_pool': {'if_spatial_avg': False, 'if_wta': False, 'momentum': None}
    },
    'parser': {'robust_norm': True}
}
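The config dump above is a plain nested dict; a minimal sketch of assembling it in Python (key names copied verbatim from the log, including the library's 'possion' spelling), with the meaning of each flag noted as I understand it:

```python
# ann2snn conversion settings mirroring the config printed above.
# The 'possion' key is spelled exactly as the library prints it.
config = {
    'simulation': {
        'reset_to_zero': False,            # soft reset (subtract threshold) instead of reset-to-zero
        'encoder': {'possion': False},     # feed analog input directly, no Poisson spike encoding
        'avg_pool': {'has_neuron': True},  # insert an IF neuron after each AvgPool2d
        'max_pool': {'if_spatial_avg': False, 'if_wta': False, 'momentum': None},
    },
    'parser': {'robust_norm': True},       # percentile-based (robust) weight normalization
}

print(config['parser']['robust_norm'])
```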
Device is cuda
Epoch 0 [1/782] ANN Training Loss:1.615 Accuracy:0.188
Epoch 0 [101/782] ANN Training Loss:0.886 Accuracy:0.680
Epoch 0 [201/782] ANN Training Loss:0.701 Accuracy:0.778
Epoch 0 [301/782] ANN Training Loss:0.621 Accuracy:0.781
Epoch 0 [401/782] ANN Training Loss:0.572 Accuracy:0.785
Epoch 0 [501/782] ANN Training Loss:0.541 Accuracy:0.786
Epoch 0 [601/782] ANN Training Loss:0.516 Accuracy:0.799
Epoch 0 [701/782] ANN Training Loss:0.499 Accuracy:0.785
Epoch 0 [157/157] ANN Validating Loss:0.386 Accuracy:0.791
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 1 [1/782] ANN Training Loss:0.357 Accuracy:0.812
Epoch 1 [101/782] ANN Training Loss:0.377 Accuracy:0.790
Epoch 1 [201/782] ANN Training Loss:0.376 Accuracy:0.793
Epoch 1 [301/782] ANN Training Loss:0.376 Accuracy:0.791
Epoch 1 [401/782] ANN Training Loss:0.376 Accuracy:0.786
Epoch 1 [501/782] ANN Training Loss:0.375 Accuracy:0.794
Epoch 1 [601/782] ANN Training Loss:0.372 Accuracy:0.797
Epoch 1 [701/782] ANN Training Loss:0.371 Accuracy:0.795
Epoch 1 [157/157] ANN Validating Loss:0.403 Accuracy:0.778
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 2 [1/782] ANN Training Loss:0.416 Accuracy:0.766
Epoch 2 [101/782] ANN Training Loss:0.349 Accuracy:0.801
Epoch 2 [201/782] ANN Training Loss:0.354 Accuracy:0.792
Epoch 2 [301/782] ANN Training Loss:0.352 Accuracy:0.800
Epoch 2 [401/782] ANN Training Loss:0.356 Accuracy:0.790
Epoch 2 [501/782] ANN Training Loss:0.358 Accuracy:0.791
Epoch 2 [601/782] ANN Training Loss:0.358 Accuracy:0.796
Epoch 2 [701/782] ANN Training Loss:0.359 Accuracy:0.786
Epoch 2 [157/157] ANN Validating Loss:0.379 Accuracy:0.785
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 3 [1/782] ANN Training Loss:0.232 Accuracy:0.859
Epoch 3 [101/782] ANN Training Loss:0.352 Accuracy:0.794
Epoch 3 [201/782] ANN Training Loss:0.354 Accuracy:0.793
Epoch 3 [301/782] ANN Training Loss:0.359 Accuracy:0.788
Epoch 3 [401/782] ANN Training Loss:0.359 Accuracy:0.790
Epoch 3 [501/782] ANN Training Loss:0.357 Accuracy:0.801
Epoch 3 [601/782] ANN Training Loss:0.356 Accuracy:0.796
Epoch 3 [701/782] ANN Training Loss:0.354 Accuracy:0.800
Epoch 3 [157/157] ANN Validating Loss:0.358 Accuracy:0.794
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 4 [1/782] ANN Training Loss:0.357 Accuracy:0.797
Epoch 4 [101/782] ANN Training Loss:0.349 Accuracy:0.799
Epoch 4 [201/782] ANN Training Loss:0.339 Accuracy:0.808
Epoch 4 [301/782] ANN Training Loss:0.346 Accuracy:0.790
Epoch 4 [401/782] ANN Training Loss:0.349 Accuracy:0.793
Epoch 4 [501/782] ANN Training Loss:0.350 Accuracy:0.795
Epoch 4 [601/782] ANN Training Loss:0.350 Accuracy:0.797
Epoch 4 [701/782] ANN Training Loss:0.349 Accuracy:0.817
Epoch 4 [157/157] ANN Validating Loss:0.080 Accuracy:0.976
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 5 [1/782] ANN Training Loss:0.028 Accuracy:1.000
Epoch 5 [101/782] ANN Training Loss:0.036 Accuracy:0.991
Epoch 5 [201/782] ANN Training Loss:0.036 Accuracy:0.990
Epoch 5 [301/782] ANN Training Loss:0.034 Accuracy:0.992
Epoch 5 [401/782] ANN Training Loss:0.035 Accuracy:0.987
Epoch 5 [501/782] ANN Training Loss:0.036 Accuracy:0.989
Epoch 5 [601/782] ANN Training Loss:0.036 Accuracy:0.988
Epoch 5 [701/782] ANN Training Loss:0.036 Accuracy:0.989
Epoch 5 [157/157] ANN Validating Loss:0.045 Accuracy:0.984
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 6 [1/782] ANN Training Loss:0.033 Accuracy:0.984
Epoch 6 [101/782] ANN Training Loss:0.031 Accuracy:0.991
Epoch 6 [201/782] ANN Training Loss:0.030 Accuracy:0.991
Epoch 6 [301/782] ANN Training Loss:0.030 Accuracy:0.990
Epoch 6 [401/782] ANN Training Loss:0.030 Accuracy:0.990
Epoch 6 [501/782] ANN Training Loss:0.030 Accuracy:0.992
Epoch 6 [601/782] ANN Training Loss:0.029 Accuracy:0.991
Epoch 6 [701/782] ANN Training Loss:0.030 Accuracy:0.990
Epoch 6 [157/157] ANN Validating Loss:0.038 Accuracy:0.987
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 7 [1/782] ANN Training Loss:0.015 Accuracy:1.000
Epoch 7 [101/782] ANN Training Loss:0.026 Accuracy:0.993
Epoch 7 [201/782] ANN Training Loss:0.025 Accuracy:0.994
Epoch 7 [301/782] ANN Training Loss:0.026 Accuracy:0.991
Epoch 7 [401/782] ANN Training Loss:0.027 Accuracy:0.991
Epoch 7 [501/782] ANN Training Loss:0.028 Accuracy:0.989
Epoch 7 [601/782] ANN Training Loss:0.027 Accuracy:0.993
Epoch 7 [701/782] ANN Training Loss:0.028 Accuracy:0.989
Epoch 7 [157/157] ANN Validating Loss:0.049 Accuracy:0.983
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 8 [1/782] ANN Training Loss:0.017 Accuracy:1.000
Epoch 8 [101/782] ANN Training Loss:0.023 Accuracy:0.994
Epoch 8 [201/782] ANN Training Loss:0.024 Accuracy:0.992
Epoch 8 [301/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 8 [401/782] ANN Training Loss:0.025 Accuracy:0.992
Epoch 8 [501/782] ANN Training Loss:0.025 Accuracy:0.994
Epoch 8 [601/782] ANN Training Loss:0.026 Accuracy:0.990
Epoch 8 [701/782] ANN Training Loss:0.025 Accuracy:0.994
Epoch 8 [157/157] ANN Validating Loss:0.135 Accuracy:0.956
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
Epoch 9 [1/782] ANN Training Loss:0.009 Accuracy:1.000
Epoch 9 [101/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [201/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [301/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [401/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [501/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [601/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [701/782] ANN Training Loss:0.024 Accuracy:0.993
Epoch 9 [157/157] ANN Validating Loss:0.051 Accuracy:0.982
Save model to: ./log-cnn_mnist1622383630.8332608\cnn_mnist.pkl
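Note that the Loss/Accuracy figures printed at each checkpoint above are cumulative averages over the epoch so far, not per-batch values (which is why they move smoothly between checkpoints). A minimal sketch of a running-average meter that would produce this log format, with placeholder metric values standing in for a real training step:

```python
# Running-average meter: each update folds one batch's metric into the
# cumulative average, matching the log lines printed every 100 batches.
class AverageMeter:
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, value, n=1):
        self.sum += value * n
        self.count += n

    @property
    def avg(self):
        return self.sum / max(self.count, 1)

loss_meter, acc_meter = AverageMeter(), AverageMeter()
for batch_idx in range(1, 783):          # 782 batches per epoch, as in the log
    batch_loss, batch_acc = 0.5, 0.8     # placeholders for real per-batch metrics
    loss_meter.update(batch_loss)
    acc_meter.update(batch_acc)
    if batch_idx % 100 == 1:             # checkpoints at 1, 101, 201, ... 701
        print(f'Epoch 0 [{batch_idx}/782] ANN Training Loss:{loss_meter.avg:.3f} '
              f'Accuracy:{acc_meter.avg:.3f}')
```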
Using 100 pictures as norm set
torchvision.datasets.folder.ImageFolder
torch.Size([100, 1, 28, 28])
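The norm set above is simply 100 MNIST-shaped samples stacked into one batch tensor. A sketch of that assembly step, with random arrays standing in for the real images loaded via ImageFolder:

```python
import numpy as np

# Stack 100 single-channel 28x28 samples into one batch, matching the
# torch.Size([100, 1, 28, 28]) printed above. Random data stands in for
# the actual MNIST images used as the normalization set.
samples = [np.random.rand(1, 28, 28).astype(np.float32) for _ in range(100)]
norm_set = np.stack(samples)   # shape (100, 1, 28, 28)
print(norm_set.shape)
```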
Load best model for Model:cnn_mnist...
ANN Validating Accuracy:0.982
Save model to: ./log-cnn_mnist1622383630.8332608\parsed_cnn_mnist.pkl
Using robust normalization...
normalize with bias...
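Robust normalization, as commonly done in ANN-to-SNN conversion, rescales each layer by a high percentile (rather than the maximum) of the activations recorded on the norm set, which makes the scale less sensitive to outliers; "with bias" means the bias term is rescaled as well. A sketch of that scheme under those assumptions (`robust_scale` and `normalize_layer` are illustrative names, not the library's API):

```python
import numpy as np

def robust_scale(activations, p=99.9):
    # Robust activation scale: a high percentile instead of the raw max.
    return np.percentile(activations, p)

def normalize_layer(weight, bias, prev_scale, cur_scale):
    # Rescale weights by the ratio of the previous layer's scale to this
    # layer's, and divide the bias by this layer's scale, so activations
    # stay within the spiking neurons' firing range.
    w = weight * (prev_scale / cur_scale)
    b = bias / cur_scale
    return w, b

acts = np.abs(np.random.randn(10000)) * 2.0   # stand-in recorded activations
s = robust_scale(acts)
w, b = normalize_layer(np.ones((4, 4)), np.zeros(4), prev_scale=1.0, cur_scale=s)
```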
Print Parsed ANN model Structure:
Pytorch_Parser(
  (network): Sequential(
    (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
    (1): ReLU()
    (2): AvgPool2d(kernel_size=2, stride=2, padding=0)
    (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
    (4): ReLU()
    (5): AvgPool2d(kernel_size=2, stride=2, padding=0)
    (6): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
    (7): ReLU()
    (8): AvgPool2d(kernel_size=2, stride=2, padding=0)
    (9): Flatten(start_dim=1, end_dim=-1)
    (10): Linear(in_features=32, out_features=5, bias=True)
    (11): ReLU()
  )
)
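Tracing the spatial sizes through the parsed network explains why the final Linear layer has in_features=32: each 3x3 Conv2d with stride 1 and no padding shrinks the side by 2, and each 2x2 AvgPool2d halves it (floor division), so a 28x28 input ends up at 1x1 with 32 channels:

```python
# 28 -> 26 -> 13 -> 11 -> 5 -> 3 -> 1 through three conv + avg-pool stages.
size = 28
for _ in range(3):              # three conv + avg-pool stages
    size = size - 2             # Conv2d 3x3, stride 1, padding 0
    size = size // 2            # AvgPool2d kernel 2, stride 2
in_features = 32 * size * size  # 32 channels at the final stage
print(size, in_features)        # -> 1 32
```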
Save model to: ./log-cnn_mnist1622383630.8332608\normalized_cnn_mnist.pkl
Print Simulated SNN model Structure:
PyTorch_Converter(
  (network): Sequential(
    (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
    (1): IFNode(
      v_threshold=1.0, v_reset=None, detach_reset=False
      (surrogate_function): Sigmoid(alpha=1.0, spiking=True)
    )
    (2): AvgPool2d(kernel_size=2, stride=2, padding=0)
    (3): IFNode(
      v_threshold=1.0, v_reset=None, detach_reset=False
      (surrogate_function): Sigmoid(alpha=1.0, spiking=True)
    )
    (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
    (5): IFNode(
      v_threshold=1.0, v_reset=None, detach_reset=False
      (surrogate_function): Sigmoid(alpha=1.0, spiking=True)
    )