CS231n Assignment1--Q3

Q3: Implement a Softmax classifier

The assignment code has been uploaded to my GitHub: https://github.com/jingshuangliu22/cs231n. Feel free to use it for reference, discussion, and corrections.

softmax.ipynb

Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
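
The 3073-dimensional rows come from flattening each 32x32x3 CIFAR-10 image into 3072 values and appending a constant bias feature (the bias trick). A minimal sketch of that preprocessing, assuming the raw splits have already been loaded into arrays like `X_train` (names here are placeholders, not the notebook's exact code):

```python
import numpy as np

def preprocess(X_train, X_val, X_test, X_dev):
    # Flatten each image into a row vector: (N, 32, 32, 3) -> (N, 3072)
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val   = np.reshape(X_val,   (X_val.shape[0],   -1))
    X_test  = np.reshape(X_test,  (X_test.shape[0],  -1))
    X_dev   = np.reshape(X_dev,   (X_dev.shape[0],   -1))

    # Zero-center every split with the training-set mean image
    mean_image = np.mean(X_train, axis=0)
    X_train, X_val = X_train - mean_image, X_val - mean_image
    X_test,  X_dev = X_test  - mean_image, X_dev  - mean_image

    # Bias trick: append a column of ones so W absorbs the bias (3072 -> 3073)
    add_bias = lambda X: np.hstack([X, np.ones((X.shape[0], 1))])
    return add_bias(X_train), add_bias(X_val), add_bias(X_test), add_bias(X_dev)
```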

Softmax Classifier

loss: 2.374488
sanity check: 2.302585
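
With W initialized to small random values, each of the 10 classes gets a probability close to 1/10, so the expected loss is -log(0.1) ≈ 2.302585, which is the sanity-check value above. A minimal sketch of the loop-based loss in the spirit of `softmax_loss_naive` from the assignment (the exact implementation is in the repo linked above; the regularization convention `reg * sum(W*W)` used here is one common variant, some versions use a 0.5 factor):

```python
import numpy as np

def softmax_loss_naive(W, X, y, reg):
    """Softmax loss and gradient with explicit loops.

    W: (D, C) weights, X: (N, D) data, y: (N,) labels, reg: L2 strength.
    """
    loss = 0.0
    dW = np.zeros_like(W)
    num_train = X.shape[0]

    for i in range(num_train):
        scores = X[i].dot(W)
        scores -= np.max(scores)                        # shift for numerical stability
        probs = np.exp(scores) / np.sum(np.exp(scores))
        loss += -np.log(probs[y[i]])
        for c in range(W.shape[1]):
            dW[:, c] += (probs[c] - (c == y[i])) * X[i]

    loss = loss / num_train + reg * np.sum(W * W)
    dW = dW / num_train + 2 * reg * W
    return loss, dW
```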

numerical: 1.055128 analytic: 1.055127, relative error: 8.054454e-08
numerical: 1.799979 analytic: 1.799979, relative error: 3.548119e-08
numerical: 3.057507 analytic: 3.057507, relative error: 4.628624e-09
numerical: 3.042005 analytic: 3.042005, relative error: 1.489120e-08
numerical: 3.134774 analytic: 3.134774, relative error: 1.959762e-08
numerical: 3.469694 analytic: 3.469694, relative error: 1.521055e-09
numerical: -0.821283 analytic: -0.821283, relative error: 4.904056e-09
numerical: 0.033736 analytic: 0.033736, relative error: 2.233549e-07
numerical: -0.249902 analytic: -0.249903, relative error: 1.138011e-07
numerical: 0.259507 analytic: 0.259507, relative error: 5.076593e-08
numerical: -4.654762 analytic: -468.625947, relative error: 9.803298e-01
numerical: -3.464832 analytic: -349.169539, relative error: 9.803489e-01
numerical: 0.225706 analytic: 22.937998, relative error: 9.805121e-01
numerical: 0.458898 analytic: 45.620289, relative error: 9.800822e-01
numerical: -2.204595 analytic: -221.769293, relative error: 9.803138e-01
numerical: -0.125210 analytic: -13.301957, relative error: 9.813498e-01
numerical: 2.005059 analytic: 201.274224, relative error: 9.802729e-01
numerical: 1.269539 analytic: 129.611320, relative error: 9.806001e-01
numerical: -0.056690 analytic: -6.611941, relative error: 9.829980e-01
numerical: -0.262966 analytic: -27.175883, relative error: 9.808326e-01
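
These lines come from the assignment's sparse gradient check (`grad_check_sparse` in `cs231n/gradient_check.py`), which compares the analytic gradient with centered finite differences at a few random coordinates. The first ten checks (no regularization) agree to about 1e-7, while the second block was run with regularization turned on and shows relative errors near 1, with analytic values roughly 100x the numerical ones; that pattern usually indicates a bug in the regularization term of the analytic gradient rather than numerical noise. A self-contained sketch of this style of check:

```python
import numpy as np

def grad_check_sparse(f, x, analytic_grad, num_checks=10, h=1e-5):
    """Compare analytic_grad to centered finite differences of f at random coordinates of x."""
    for _ in range(num_checks):
        ix = tuple(np.random.randint(n) for n in x.shape)

        old = x[ix]
        x[ix] = old + h
        fxph = f(x)          # f(x + h) at this coordinate
        x[ix] = old - h
        fxmh = f(x)          # f(x - h)
        x[ix] = old          # restore

        grad_numerical = (fxph - fxmh) / (2 * h)
        grad_analytic = analytic_grad[ix]
        rel_error = abs(grad_numerical - grad_analytic) / \
                    (abs(grad_numerical) + abs(grad_analytic))
        print('numerical: %f analytic: %f, relative error: %e'
              % (grad_numerical, grad_analytic, rel_error))
```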

naive loss: 2.374488e+00 computed in 0.211975s
vectorized loss: 2.374488e+00 computed in 0.009088s
Loss difference: 0.000000
Gradient difference: 0.000000
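
The vectorized version reproduces the naive loss and gradient exactly (differences of 0.000000) while running about 20x faster, because the per-example loops are replaced with matrix operations. A minimal sketch under the same regularization convention as the naive version above:

```python
import numpy as np

def softmax_loss_vectorized(W, X, y, reg):
    num_train = X.shape[0]

    scores = X.dot(W)                                   # (N, C) class scores
    scores -= np.max(scores, axis=1, keepdims=True)     # numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    # Average cross-entropy over the batch plus L2 regularization
    loss = -np.sum(np.log(probs[np.arange(num_train), y])) / num_train
    loss += reg * np.sum(W * W)

    # Gradient: (p - 1[class == y]) back-propagated through the linear scores
    dscores = probs.copy()
    dscores[np.arange(num_train), y] -= 1
    dW = X.T.dot(dscores) / num_train + 2 * reg * W
    return loss, dW
```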

iteration 0 / 1500: loss 782.193431
iteration 100 / 1500: loss 287.369345
iteration 200 / 1500: loss 106.520535
iteration 300 / 1500: loss 40.317201
iteration 400 / 1500: loss 16.090033
iteration 500 / 1500: loss 7.234079
iteration 600 / 1500: loss 3.996430
iteration 700 / 1500: loss 2.795388
iteration 800 / 1500: loss 2.292711
iteration 900 / 1500: loss 2.170212
iteration 1000 / 1500: loss 2.175318
iteration 1100 / 1500: loss 2.168804
iteration 1200 / 1500: loss 2.132038
iteration 1300 / 1500: loss 2.146590
iteration 1400 / 1500: loss 2.076369
iteration 0 / 1500: loss 1533014.675630
iteration 100 / 1500: loss nan
iteration 200 / 1500: loss nan
iteration 300 / 1500: loss nan
iteration 400 / 1500: loss nan
iteration 500 / 1500: loss nan
iteration 600 / 1500: loss nan
iteration 700 / 1500: loss nan
iteration 800 / 1500: loss nan
iteration 900 / 1500: loss nan
iteration 1000 / 1500: loss nan
iteration 1100 / 1500: loss nan
iteration 1200 / 1500: loss nan
iteration 1300 / 1500: loss nan
iteration 1400 / 1500: loss nan
iteration 0 / 1500: loss 768.536932
iteration 100 / 1500: loss 6.860303
iteration 200 / 1500: loss 2.084644
iteration 300 / 1500: loss 2.097076
iteration 400 / 1500: loss 2.101419
iteration 500 / 1500: loss 2.098051
iteration 600 / 1500: loss 2.126691
iteration 700 / 1500: loss 2.055123
iteration 800 / 1500: loss 2.072236
iteration 900 / 1500: loss 2.111768
iteration 1000 / 1500: loss 2.136510
iteration 1100 / 1500: loss 2.105484
iteration 1200 / 1500: loss 2.069053
iteration 1300 / 1500: loss 2.135619
iteration 1400 / 1500: loss 2.033488
iteration 0 / 1500: loss 1550818.358399
iteration 100 / 1500: loss nan
iteration 200 / 1500: loss nan
iteration 300 / 1500: loss nan
iteration 400 / 1500: loss nan
iteration 500 / 1500: loss nan
iteration 600 / 1500: loss nan
iteration 700 / 1500: loss nan
iteration 800 / 1500: loss nan
iteration 900 / 1500: loss nan
iteration 1000 / 1500: loss nan
iteration 1100 / 1500: loss nan
iteration 1200 / 1500: loss nan
iteration 1300 / 1500: loss nan
iteration 1400 / 1500: loss nan
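
The four training logs above correspond to the four (learning rate, regularization) combinations evaluated below: with reg = 1e8 the initial loss is on the order of 1e6 and the weights overflow to nan within the first 100 iterations, while the reg = 5e4 runs settle near a loss of 2.1. A minimal sketch of the minibatch SGD loop in the spirit of `LinearClassifier.train` (the signature here is an assumption for illustration, not the repo's exact code):

```python
import numpy as np

def train(W, X, y, loss_fn, learning_rate=1e-7, reg=5e4,
          num_iters=1500, batch_size=200):
    """Minibatch SGD; loss_fn(W, X_batch, y_batch, reg) returns (loss, dW)."""
    num_train = X.shape[0]
    loss_history = []
    for it in range(num_iters):
        # Sample a minibatch (with replacement, as in the assignment)
        idx = np.random.choice(num_train, batch_size, replace=True)
        X_batch, y_batch = X[idx], y[idx]

        loss, dW = loss_fn(W, X_batch, y_batch, reg)
        loss_history.append(loss)
        W -= learning_rate * dW                 # gradient descent step

        if it % 100 == 0:
            print('iteration %d / %d: loss %f' % (it, num_iters, loss))
    return W, loss_history
```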
lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.327531 val accuracy: 0.353000
lr 1.000000e-07 reg 1.000000e+08 train accuracy: 0.100265 val accuracy: 0.087000
lr 5.000000e-07 reg 5.000000e+04 train accuracy: 0.330939 val accuracy: 0.331000
lr 5.000000e-07 reg 1.000000e+08 train accuracy: 0.100265 val accuracy: 0.087000
best validation accuracy achieved during cross-validation: 0.353000
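
The grid covers learning rates {1e-7, 5e-7} and regularization strengths {5e4, 1e8}; lr = 1e-7 with reg = 5e4 gives the best validation accuracy of 0.353. A sketch of the tuning loop, assuming the `Softmax` classifier class with `train`/`predict` from the assignment and the preprocessed splits described above:

```python
import numpy as np
from cs231n.classifiers import Softmax

learning_rates = [1e-7, 5e-7]
regularization_strengths = [5e4, 1e8]

results = {}
best_val, best_softmax = -1, None

for lr in learning_rates:
    for reg in regularization_strengths:
        softmax = Softmax()
        softmax.train(X_train, y_train, learning_rate=lr, reg=reg,
                      num_iters=1500, verbose=True)
        train_acc = np.mean(softmax.predict(X_train) == y_train)
        val_acc = np.mean(softmax.predict(X_val) == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val, best_softmax = val_acc, softmax

for (lr, reg), (train_acc, val_acc) in sorted(results.items()):
    print('lr %e reg %e train accuracy: %f val accuracy: %f'
          % (lr, reg, train_acc, val_acc))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
```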

softmax on raw pixels final test set accuracy: 0.340000
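
Evaluating the best model from the sweep on the test set gives the 34% accuracy reported above. The evaluation itself is short (a sketch, assuming `best_softmax` from the tuning loop):

```python
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % test_accuracy)
```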

