Stanford CS231n Computer Vision Training Camp: Image Features Exercise

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained classifer in best_svm. You might also want to play          #
# with different numbers of bins in the color histogram. If you are careful    #
# you should be able to get accuracy of near 0.44 on the validation set.       #
################################################################################
from copy import deepcopy

# results, best_val, best_svm and the candidate learning_rates /
# regularization_strengths lists are initialized by the notebook scaffold.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg,
                  num_iters=3000, verbose=False)
        y_train_pred = svm.predict(X_train_feats)
        train_acc = np.mean(y_train_pred == y_train)
        y_val_pred = svm.predict(X_val_feats)
        val_acc = np.mean(y_val_pred == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = deepcopy(svm)  # keep a copy of the best classifier
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
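The TODO above also suggests experimenting with the number of bins in the color histogram. As a hedged sketch of what varying the bin count changes (a simplified stand-in for the assignment's `color_histogram_hsv` feature: it histograms a precomputed hue array instead of converting an RGB image to HSV first):

```python
import numpy as np

def hue_histogram(hue, nbin=10):
    # Normalized histogram over [0, 1): each entry is the fraction of
    # pixels whose hue falls in that bin, so the entries sum to 1.
    bins = np.linspace(0.0, 1.0, nbin + 1)
    hist, _ = np.histogram(hue.ravel(), bins=bins)
    return hist / hist.sum()

hue = np.array([0.05, 0.12, 0.55, 0.58, 0.95])  # toy hue values
feat10 = hue_histogram(hue, nbin=10)  # 10-dimensional feature
feat20 = hue_histogram(hue, nbin=20)  # finer binning, 20 dimensions
```

More bins give a finer description of the color distribution but a higher-dimensional, noisier feature, which is why it is worth cross-validating.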

Inline question 1:

Describe the misclassification results that you see. Do they make sense?

For example, a bird is sometimes predicted as a ship. This makes sense: a bird with its wings spread has a silhouette much like a ship's, and the HOG and color-histogram features capture only coarse shape and color, not semantics.
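One way to check this claim is to collect the indices where predictions disagree with the labels and visualize those images. A minimal NumPy sketch, using synthetic labels in place of the notebook's `y_test` / `y_test_pred` (in CIFAR-10, class 2 is bird and class 8 is ship):

```python
import numpy as np

y_test = np.array([2, 0, 8, 2, 8])       # true labels (synthetic)
y_test_pred = np.array([8, 0, 8, 2, 2])  # predictions (synthetic)

# Indices of all misclassified examples.
wrong = np.flatnonzero(y_test_pred != y_test)

# Birds (2) predicted as ships (8): the confusion described above.
bird_as_ship = np.flatnonzero((y_test == 2) & (y_test_pred == 8))
```

In the notebook these index arrays would be used to select and plot the corresponding test images.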

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.                                              #
################################################################################
# input_dim, hidden_dim and num_classes are set earlier in the notebook
# (input_dim is the dimensionality of the extracted feature vectors).
learning_rates = [1, 1e-1]
regularization_strengths = [1e-5, 5e-5, 1e-4]
best_acc = -1
best_net = None
for lr in learning_rates:
    for reg in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=2000, batch_size=200,
                  learning_rate=lr, learning_rate_decay=0.95,
                  reg=reg, verbose=False)
        val_acc = np.mean(net.predict(X_val_feats) == y_val)
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = deepcopy(net)  # deepcopy imported in the SVM cell
print('best val acc: {:.3f}'.format(best_acc))
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
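After selecting `best_net` on the validation set, the exercise ends by measuring test-set accuracy once. A sketch with synthetic predictions standing in for `best_net.predict(X_test_feats)`:

```python
import numpy as np

# In the notebook:
#   test_acc = np.mean(best_net.predict(X_test_feats) == y_test)
y_test = np.array([3, 1, 4, 1, 5, 9, 2, 6])       # synthetic labels
y_test_pred = np.array([3, 1, 4, 0, 5, 9, 2, 7])  # two mistakes
test_acc = np.mean(y_test_pred == y_test)
print('test accuracy: %.3f' % test_acc)
```

The test set is touched only this once, after all hyperparameters have been fixed on the validation set.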

 
