Continuing with Assignment 1. The code here borrows from:
http://www.cnblogs.com/daihengchen/p/5754383.html
1. SVM
svm.ipynb
The first few steps are the same as for KNN: load the data and visualize a few sample images.
Then the 50,000 original training images are split into 49,000 for training and 1,000 for validation; the first 1,000 images of the original test set serve as the test set, and a 500-image development set is sampled from the training data to make experimentation faster.
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print ('Train data shape: ', X_train.shape)
print ('Train labels shape: ', y_train.shape)
print ('Validation data shape: ', X_val.shape)
print ('Validation labels shape: ', y_val.shape)
print ('Test data shape: ', X_test.shape)
print ('Test labels shape: ', y_test.shape)
Output:
Train data shape: (49000, 32, 32, 3)
Train labels shape: (49000,)
Validation data shape: (1000, 32, 32, 3)
Validation labels shape: (1000,)
Test data shape: (1000, 32, 32, 3)
Test labels shape: (1000,)
Then a quick reshape to flatten each image into a row:
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print ('Training data shape: ', X_train.shape)
print ('Validation data shape: ', X_val.shape)
print ('Test data shape: ', X_test.shape)
print ('dev data shape: ', X_dev.shape)
Output:
Training data shape: (49000, 3072)
Validation data shape: (1000, 3072)
Test data shape: (1000, 3072)
dev data shape: (500, 3072)
Each split is now a matrix with one flattened image per row.
Next, compute the mean image over the training data, print its first 10 entries, and visualize it:
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print (mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
Output:
[ 130.64189796 135.98173469 132.47391837 130.05569388 135.34804082 131.75402041 130.96055102 136.14328571 132.47636735 131.48467347]
Then subtract the mean image from every split as preprocessing.
To match the score function f(x) = Wx + b with the bias b from lecture, a column of ones is appended to each data matrix so that the bias can be folded into a single weight matrix W (the bias trick).
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print( X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
Output:
(49000, 3073) (1000, 3073) (1000, 3073) (500, 3073)
Now for the main part: generate a small random weight matrix W and compute the loss.
The skeleton of the SVM loss computation is provided, with L2 regularization already included; the gradient dW still has to be written by hand.
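For reference (the notebook doesn't spell this out, but it is exactly what the code below computes), the loss is the multiclass hinge loss with margin 1 plus L2 regularization:

L = \frac{1}{N}\sum_i \sum_{j \neq y_i} \max\!\left(0,\; s_{ij} - s_{i y_i} + 1\right) + \frac{\lambda}{2}\lVert W \rVert_2^2, \qquad s_{ij} = (x_i W)_j

and the per-example gradient that the hand-written part accumulates is

\frac{\partial L_i}{\partial w_j} = \mathbb{1}\!\left[s_{ij} - s_{i y_i} + 1 > 0\right] x_i \;\; (j \neq y_i), \qquad \frac{\partial L_i}{\partial w_{y_i}} = -\Big(\sum_{j \neq y_i} \mathbb{1}\!\left[s_{ij} - s_{i y_i} + 1 > 0\right]\Big) x_i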
import numpy as np
from random import shuffle
def svm_loss_naive(W, X, y, reg):
    """
    Structured SVM loss function, naive implementation (with loops).

    Inputs have dimension D, there are C classes, and we operate on minibatches
    of N examples.

    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing a minibatch of data.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c means
      that X[i] has label c, where 0 <= c < C.
    - reg: (float) regularization strength

    Returns a tuple of:
    - loss as single float
    - gradient with respect to weights W; an array of same shape as W
    """
    dW = np.zeros(W.shape)  # initialize the gradient as zero

    # compute the loss and the gradient
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1  # note delta = 1
            if margin > 0:
                loss += margin
                # Each violated margin pushes column j up by x_i and the
                # correct-class column down by x_i.
                dW[:, y[i]] -= X[i, :]
                dW[:, j] += X[i, :]

    # Right now the loss is a sum over all training examples, but we want it
    # to be an average instead so we divide by num_train.
    loss /= num_train

    # Add regularization to the loss.
    loss += 0.5 * reg * np.sum(W * W)

    #############################################################################
    # TODO:                                                                     #
    # Compute the gradient of the loss function and store it in dW.             #
    # Rather than first computing the loss and then computing the derivative,   #
    # it may be simpler to compute the derivative at the same time that the     #
    # loss is being computed. As a result you may need to modify some of the    #
    # code above to compute the gradient.                                       #
    #############################################################################
    dW /= num_train
    dW += reg * W

    return loss, dW
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.00001)
print ('loss: %f' % (loss, ))
Output:
loss: 8.907044
As a sanity check, this is close to 9: with a near-zero random W all scores are roughly 0, so each of the 9 incorrect classes contributes a margin of about 1.
Next comes the gradient check, which uses the 500-example development set split off earlier.
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
print(1)
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
print(2)
loss, grad = svm_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad)
Output:
1
numerical: 14.617711 analytic: 14.617711, relative error: 6.404210e-12
numerical: 0.115072 analytic: 0.115072, relative error: 4.071593e-10
numerical: 13.349968 analytic: 13.349968, relative error: 1.729524e-12
numerical: 15.778984 analytic: 15.778984, relative error: 3.307991e-11
numerical: 9.366425 analytic: 9.366425, relative error: 3.311847e-11
numerical: 4.323910 analytic: 4.323910, relative error: 1.216207e-11
numerical: 23.449507 analytic: 23.449507, relative error: 1.036899e-11
numerical: -1.107864 analytic: -1.107864, relative error: 2.104031e-10
numerical: -1.563012 analytic: -1.563012, relative error: 1.622242e-11
numerical: 26.937009 analytic: 26.937009, relative error: 1.647830e-11
2
numerical: 13.397230 analytic: 13.397230, relative error: 2.003104e-11
numerical: -2.885354 analytic: -2.885354, relative error: 6.992493e-11
numerical: 9.370604 analytic: 9.370604, relative error: 1.147164e-11
numerical: -7.630019 analytic: -7.630019, relative error: 3.550522e-11
numerical: 11.904373 analytic: 11.904373, relative error: 2.424806e-14
numerical: -8.516367 analytic: -8.516367, relative error: 4.743488e-11
numerical: -21.188352 analytic: -21.188352, relative error: 1.446557e-11
numerical: 23.897181 analytic: 23.897181, relative error: 1.202490e-12
numerical: 8.350929 analytic: 8.350929, relative error: 2.168510e-12
numerical: -24.697978 analytic: -24.697978, relative error: 3.098162e-12
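The numerical column is produced by a centered finite difference: grad_check_sparse (in cs231n/gradient_check.py) picks a few random coordinates of W and compares the analytic gradient there against the estimate. The following is only a minimal sketch of the idea, not the course's actual implementation:

import numpy as np

def numeric_grad_at(f, W, ix, h=1e-5):
    """Centered-difference estimate of dL/dW at a single coordinate ix."""
    old = W[ix]
    W[ix] = old + h
    fxph = f(W)   # f(W + h*e_ix)
    W[ix] = old - h
    fxmh = f(W)   # f(W - h*e_ix)
    W[ix] = old   # restore W before returning
    return (fxph - fxmh) / (2.0 * h)

# Usage with the loss from above, e.g. ix = (0, 3):
#   f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
#   num = numeric_grad_at(f, W, (0, 3))
# The relative error printed above is roughly |num - ana| / (|num| + |ana|),
# which should be tiny (~1e-10 or smaller) when the analytic gradient is right.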
. . .
def svm_loss_vectorized(W, X, y, reg):
    """
    Structured SVM loss function, vectorized implementation.

    Inputs and outputs are the same as svm_loss_naive.
    """
    loss = 0.0
    dW = np.zeros(W.shape)  # initialize the gradient as zero

    scores = X.dot(W)
    num_train = X.shape[0]
    #############################################################################
    # TODO:                                                                     #
    # Implement a vectorized version of the structured SVM loss, storing the    #
    # result in loss.                                                           #
    #############################################################################
    scores_correct = scores[np.arange(num_train), y]
    scores_correct = np.reshape(scores_correct, (num_train, -1))  # column vector, for broadcasting
    margins = scores - scores_correct + 1
    margins = np.maximum(0, margins)
    margins[np.arange(num_train), y] = 0
    loss += np.sum(margins) / num_train
    loss += 0.5 * reg * np.sum(W * W)
    #############################################################################
    #                             END OF YOUR CODE                              #
    #############################################################################

    #############################################################################
    # TODO:                                                                     #
    # Implement a vectorized version of the gradient for the structured SVM     #
    # loss, storing the result in dW.                                           #
    #                                                                           #
    # Hint: Instead of computing the gradient from scratch, it may be easier    #
    # to reuse some of the intermediate values that you used to compute the     #
    # loss.                                                                     #
    #############################################################################
    # Binarize the margins (1 where a margin was violated), then put the
    # negative violation count in each correct-class slot, so that
    # X.T @ margins reproduces the gradient of the naive double loop.
    margins[margins > 0] = 1
    row_sum = np.sum(margins, axis=1)
    margins[np.arange(num_train), y] = -row_sum
    dW += np.dot(X.T, margins) / num_train + reg * W
    #############################################################################
    #                             END OF YOUR CODE                              #
    #############################################################################
    return loss, dW
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print ('Naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print ('Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses should match but your vectorized implementation should be much faster.
print ('difference: %f' % (loss_naive - loss_vectorized))
Output:
Vectorized loss: 8.907044e+00 computed in 0.015682s
difference: 0.000000
Vectorized loss and gradient: computed in 0.015629s
difference: 0.000000
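The last two output lines come from the gradient-comparison cell, which these notes don't reproduce; it amounts to something along the lines of the following sketch (timing both implementations and comparing the gradients by the Frobenius norm of their difference):

tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print('Naive loss and gradient: computed in %fs' % (toc - tic))

tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print('Vectorized loss and gradient: computed in %fs' % (toc - tic))

# The loss is a single number, so comparing the full gradients is the real
# test: the norm of the elementwise difference should be essentially zero.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('difference: %f' % difference)

With loss and gradient verified, linear_classifier.py wraps them in an SGD training loop: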
import numpy as np
from cs231n.classifiers.linear_svm import *
from cs231n.classifiers.softmax import *


class LinearClassifier(object):

    def __init__(self):
        self.W = None

    def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
              batch_size=200, verbose=False):
        """
        Train this linear classifier using stochastic gradient descent.

        Inputs:
        - X: A numpy array of shape (N, D) containing training data; there are N
          training samples each of dimension D.
        - y: A numpy array of shape (N,) containing training labels; y[i] = c
          means that X[i] has label 0 <= c < C for C classes.
        - learning_rate: (float) learning rate for optimization.
        - reg: (float) regularization strength.
        - num_iters: (integer) number of steps to take when optimizing
        - batch_size: (integer) number of training examples to use at each step.
        - verbose: (boolean) If true, print progress during optimization.

        Outputs:
        A list containing the value of the loss function at each training iteration.
        """
        num_train, dim = X.shape
        num_classes = np.max(y) + 1  # assume y takes values 0...K-1 where K is number of classes
        if self.W is None:
            # lazily initialize W
            self.W = 0.001 * np.random.randn(dim, num_classes)

        # Run stochastic gradient descent to optimize W
        loss_history = []
        for it in range(num_iters):
            X_batch = None
            y_batch = None
            #########################################################################
            # TODO:                                                                 #
            # Sample batch_size elements from the training data and their           #
            # corresponding labels to use in this round of gradient descent.        #
            # Store the data in X_batch and their corresponding labels in           #
            # y_batch; after sampling X_batch should have shape (batch_size, dim)   #
            # and y_batch should have shape (batch_size,)                           #
            #                                                                       #
            # Hint: Use np.random.choice to generate indices. Sampling with         #
            # replacement is faster than sampling without replacement.              #
            #########################################################################
            # Sampling without replacement here; replace=True would be a bit faster.
            select_idxs = np.random.choice(num_train, batch_size, replace=False)
            X_batch = X[select_idxs, :]
            y_batch = y[select_idxs]
            #########################################################################
            #                         END OF YOUR CODE                              #
            #########################################################################

            # evaluate loss and gradient
            loss, grad = self.loss(X_batch, y_batch, reg)
            loss_history.append(loss)

            # perform parameter update
            #########################################################################
            # TODO:                                                                 #
            # Update the weights using the gradient and the learning rate.          #
            #########################################################################
            # Vanilla SGD step: move against the gradient.
            self.W -= learning_rate * grad
            #########################################################################
            #                         END OF YOUR CODE                              #
            #########################################################################

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))

        return loss_history

    def predict(self, X):
        """
        Use the trained weights of this linear classifier to predict labels for
        data points.

        Inputs:
        - X: A numpy array of shape (N, D) containing data; each row is a
          D-dimensional point.

        Returns:
        - y_pred: Predicted labels for the data in X. y_pred is a 1-dimensional
          array of length N, and each element is an integer giving the predicted
          class.
        """
        y_pred = np.zeros(X.shape[0])
        ###########################################################################
        # TODO:                                                                   #
        # Implement this method. Store the predicted labels in y_pred.            #
        ###########################################################################
        # The predicted class for each row is the column with the highest score.
        scores = np.dot(X, self.W)
        y_pred = np.argmax(scores, axis=1)
        ###########################################################################
        #                           END OF YOUR CODE                              #
        ###########################################################################
        return y_pred

    def loss(self, X_batch, y_batch, reg):
        """
        Compute the loss function and its derivative.
        Subclasses will override this.

        Inputs:
        - X_batch: A numpy array of shape (N, D) containing a minibatch of N
          data points; each point has dimension D.
        - y_batch: A numpy array of shape (N,) containing labels for the minibatch.
        - reg: (float) regularization strength.

        Returns: A tuple containing:
        - loss as a single float
        - gradient with respect to self.W; an array of the same shape as W
        """
        pass


class LinearSVM(LinearClassifier):
    """ A subclass that uses the Multiclass SVM loss function """

    def loss(self, X_batch, y_batch, reg):
        return svm_loss_vectorized(self.W, X_batch, y_batch, reg)


class Softmax(LinearClassifier):
    """ A subclass that uses the Softmax + Cross-entropy loss function """

    def loss(self, X_batch, y_batch, reg):
        return softmax_loss_vectorized(self.W, X_batch, y_batch, reg)
Then train for 1,500 iterations of SGD and watch the loss come down:
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
#from cs231n.classifiers.linear_classifier import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=5e4,
num_iters=1500, verbose=True)
toc = time.time()
print ('That took %fs' % (toc - tic))
Output:
iteration 100 / 1500: loss 291.720273
iteration 200 / 1500: loss 110.101555
iteration 300 / 1500: loss 43.277280
iteration 400 / 1500: loss 19.120079
iteration 500 / 1500: loss 10.168634
iteration 600 / 1500: loss 7.524203
iteration 700 / 1500: loss 5.793495
iteration 800 / 1500: loss 5.616953
iteration 900 / 1500: loss 5.363792
iteration 1000 / 1500: loss 5.512803
iteration 1100 / 1500: loss 5.209704
iteration 1200 / 1500: loss 5.296725
iteration 1300 / 1500: loss 5.505181
iteration 1400 / 1500: loss 5.340960
That took 8.797818s
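The accuracy line below comes from evaluating the trained SVM on the validation set. That notebook cell isn't reproduced in these notes, but with the predict method above it is just:

y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred),))

Output: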
validation accuracy: 0.376000
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
hypara = [(x, y) for x in learning_rates for y in regularization_strengths]
for Lrate, regS in hypara:
    svm = LinearSVM()
    loss_hist = svm.train(X_train, y_train, learning_rate=Lrate, reg=regS,
                          num_iters=900, verbose=False)
    y_train_pred = svm.predict(X_train)
    accuracy_train = np.mean(y_train == y_train_pred)
    y_val_pred = svm.predict(X_val)
    accuracy_val = np.mean(y_val == y_val_pred)
    results[(Lrate, regS)] = (accuracy_train, accuracy_val)
    if best_val < accuracy_val:
        best_val = accuracy_val
        best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
Output:
lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.370224 val accuracy: 0.389000
lr 1.000000e-07 reg 1.000000e+05 train accuracy: 0.357714 val accuracy: 0.372000
lr 5.000000e-05 reg 5.000000e+04 train accuracy: 0.053286 val accuracy: 0.055000
lr 5.000000e-05 reg 1.000000e+05 train accuracy: 0.100265 val accuracy: 0.087000
best validation accuracy achieved during cross-validation: 0.389000