Softmax
• Implement a fully-vectorized loss function for the Softmax classifier
• Implement the fully-vectorized expression for its analytic gradient
• Check your implementation with numerical gradient checking
• Use a validation set to tune the learning rate and regularization strength
• Optimize the loss function with SGD
• Visualize the final learned weights
1. Principle
The Softmax classifier expresses the chance that each class is selected as a probability, which lets us quantify the classification result.
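Concretely, given an example $x_i$ with raw class scores $s = x_i W$, the softmax function converts the scores into a probability for each class $k$ (the standard formulation used throughout this assignment):

$$P(y_i = k \mid x_i) = \frac{e^{s_k}}{\sum_{j} e^{s_j}}$$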
1.1 Loss function
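The per-example loss is the cross-entropy of these probabilities evaluated at the correct class $y_i$, plus an L2 regularization term over the weight matrix; this is the form the code in Section 2.2 implements:

$$L_i = -\log\frac{e^{s_{y_i}}}{\sum_j e^{s_j}} = \log\sum_j e^{s_j} - s_{y_i}, \qquad L = \frac{1}{N}\sum_{i=1}^{N} L_i + \lambda \sum_{k,l} W_{k,l}^2$$

In the code the regularization strength $\lambda$ is the reg argument, which is why the implementation adds reg * np.sum(W * W) to the data loss.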
1.2 Derivation of the gradient
Differentiating, and treating the correct-class column and the other columns as separate cases, gives:
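Writing $p_j = e^{s_j} / \sum_k e^{s_k}$ for the softmax probability of class $j$, the per-example gradient with respect to the weight column $w_j$ is:

$$\frac{\partial L_i}{\partial w_j} = \begin{cases} (p_j - 1)\,x_i, & j = y_i \\ p_j\,x_i, & j \neq y_i \end{cases}$$

which is exactly what the inner loop of the naive implementation in Section 2.2.1 accumulates into dW[:, j] (plus the $2\lambda W$ term from the regularizer).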
2. Implementation
2.1 Preprocessing
import numpy as np
from cs231n.data_utils import load_CIFAR10

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'

    # Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
    try:
        del X_train, y_train
        del X_test, y_test
        print('Clear previously loaded data.')
    except:
        pass

    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    mask = np.random.choice(num_training, num_dev, replace=False)
    X_dev = X_train[mask]
    y_dev = y_train[mask]

    # Preprocessing: reshape the image data into rows
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))
    X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))

    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image

    # add bias dimension and transform into columns
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])

    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
Output:
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
2.2 Two implementations of the softmax loss
2.2.1 Naive implementation
def softmax_loss_naive(W, X, y, reg):
    """
    Softmax loss function, naive implementation (with loops)

    Inputs have dimension D, there are C classes, and we operate on minibatches
    of N examples.

    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing a minibatch of data.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c means
      that X[i] has label c, where 0 <= c < C.
    - reg: (float) regularization strength

    Returns a tuple of:
    - loss as single float
    - gradient with respect to weights W; an array of same shape as W
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    #############################################################################
    # TODO: Compute the softmax loss and its gradient using explicit loops.     #
    # Store the loss in loss and the gradient in dW. If you are not careful     #
    # here, it is easy to run into numeric instability. Don't forget the        #
    # regularization!                                                           #
    #############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_train = X.shape[0]
    num_class = W.shape[1]
    for i in range(num_train):
        score = X[i].dot(W)
        score -= np.max(score)        # shift the scores for numeric stability
        correct_score = score[y[i]]   # score of the correct class
        exp_sum = np.sum(np.exp(score))
        loss += np.log(exp_sum) - correct_score
        for j in range(num_class):
            if j == y[i]:
                dW[:, j] += np.exp(score[j]) / exp_sum * X[i] - X[i]
            else:
                dW[:, j] += np.exp(score[j]) / exp_sum * X[i]
    loss /= num_train
    loss += reg * np.sum(W * W)
    dW /= num_train
    dW += 2 * reg * W
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    return loss, dW
Code analysis:
Inputs:
- W: a numpy array of shape (D, C) containing the weights.
- X: a numpy array of shape (N, D) containing a minibatch of data.
- y: a numpy array of shape (N,) containing the training labels; y[i] = c means that X[i] has label c, with 0 <= c < C.
- reg: regularization strength (float).
Returns, as a tuple:
- the loss, a single float;
- the gradient with respect to the weights W, an array with the same shape as W.
The softmax loss and its gradient are computed with explicit loops; the loss is accumulated in loss and the gradient in dW.
2.2.2 Vectorized implementation
def softmax_loss_vectorized(W, X, y, reg):
    """
    Softmax loss function, vectorized version.

    Inputs and outputs are the same as softmax_loss_naive.
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    #############################################################################
    # TODO: Compute the softmax loss and its gradient using no explicit loops.  #
    # Store the loss in loss and the gradient in dW. If you are not careful     #
    # here, it is easy to run into numeric instability. Don't forget the        #
    # regularization!                                                           #
    #############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_train = X.shape[0]
    score = X.dot(W)
    # axis=1 takes the max of each row; score keeps its (N, C) shape, e.g. 500 x 10 on the dev set
    score -= np.max(score, axis=1)[:, np.newaxis]
    # correct_score has shape (N,), one correct-class score per example
    correct_score = score[range(num_train), y]
    exp_score = np.exp(score)
    # sum_exp_score has shape (N,), one normalizer per example
    sum_exp_score = np.sum(exp_score, axis=1)
    # compute the loss
    loss = np.sum(np.log(sum_exp_score) - correct_score)
    loss /= num_train
    loss += reg * np.sum(W * W)
    # compute the gradient
    margin = np.exp(score) / sum_exp_score.reshape(num_train, 1)
    margin[np.arange(num_train), y] += -1
    dW = X.T.dot(margin)
    dW /= num_train
    dW += 2 * reg * W
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    return loss, dW
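In matrix form, the gradient computed above is $dW = \frac{1}{N}\,X^\top (P - Y) + 2\lambda W$, where $P$ is the $N \times C$ matrix of softmax probabilities and $Y$ is the one-hot encoding of the labels; the margin array in the code is exactly $P - Y$.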
2.2.3 Loss computation (naive implementation)
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
Output:
loss: 2.331209
sanity check: 2.302585
2.2.4 Numerical gradient check
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
Output:
numerical: -2.197958 analytic: -2.197959, relative error: 1.835440e-08
numerical: -0.371474 analytic: -0.371474, relative error: 2.418036e-08
numerical: 0.663829 analytic: 0.663829, relative error: 1.963636e-08
numerical: -0.997378 analytic: -0.997378, relative error: 4.757704e-09
numerical: -1.798469 analytic: -1.798470, relative error: 2.347157e-08
numerical: 0.919473 analytic: 0.919473, relative error: 2.329868e-08
numerical: -1.474659 analytic: -1.474659, relative error: 4.436053e-08
numerical: -3.986761 analytic: -3.986761, relative error: 1.034609e-08
numerical: -4.929064 analytic: -4.929064, relative error: 6.675491e-09
numerical: 1.963514 analytic: 1.963514, relative error: 4.131495e-09
numerical: 3.458831 analytic: 3.458831, relative error: 1.796046e-08
numerical: -4.019831 analytic: -4.019831, relative error: 2.218832e-08
numerical: 0.260078 analytic: 0.260078, relative error: 6.823148e-08
numerical: -4.096983 analytic: -4.096983, relative error: 1.406447e-08
numerical: 0.176079 analytic: 0.176079, relative error: 1.447370e-07
numerical: -1.231328 analytic: -1.231328, relative error: 4.951874e-08
numerical: -1.585260 analytic: -1.585260, relative error: 2.863777e-08
numerical: 0.564375 analytic: 0.564375, relative error: 1.277931e-07
numerical: 1.797849 analytic: 1.797849, relative error: 5.506777e-10
numerical: 0.715550 analytic: 0.715550, relative error: 3.893813e-08
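For reference, the relative errors above come from comparing the analytic gradient against a numerically estimated one at a few randomly chosen coordinates of W. The sketch below shows the idea behind such a check using a centered difference; it is not the cs231n grad_check_sparse helper itself, and the function name, signature, and the 1e-12 guard in the denominator are illustrative choices.

import numpy as np

def numeric_grad_check(f, W, analytic_grad, num_checks=10, h=1e-5):
    # Pick a few random coordinates of W and compare the analytic gradient
    # against a centered-difference estimate of the numeric gradient.
    for _ in range(num_checks):
        ix = tuple(np.random.randint(dim) for dim in W.shape)
        old_value = W[ix]
        W[ix] = old_value + h
        fxph = f(W)                      # evaluate f at W with this coordinate bumped up
        W[ix] = old_value - h
        fxmh = f(W)                      # evaluate f at W with this coordinate bumped down
        W[ix] = old_value                # restore the original value
        grad_numerical = (fxph - fxmh) / (2 * h)
        grad_analytic = analytic_grad[ix]
        rel_error = (abs(grad_numerical - grad_analytic)
                     / (abs(grad_numerical) + abs(grad_analytic) + 1e-12))
        print('numerical: %f analytic: %f, relative error: %e'
              % (grad_numerical, grad_analytic, rel_error))

Relative errors around 1e-7 or smaller, as in the output above, indicate that the analytic gradient matches the numeric estimate.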
2.2.5 Comparing the naive and vectorized implementations
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
Output:
naive loss: 2.331209e+00 computed in 0.236862s
vectorized loss: 2.331209e+00 computed in 0.015994s
Loss difference: 0.000000
Gradient difference: 0.000000
2.3 Hyperparameter tuning
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
import matplotlib.pyplot as plt  # used for the loss curves below (imported earlier in the notebook)
results = {}
best_val = -1
best_softmax = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifer in best_softmax. #
################################################################################
# Provided as a reference. You may or may not want to change these hyperparameters
learning_rates = [2e-7,5e-7,7e-7,9e-7]
regularization_strengths = [2.4e4, 2.5e4]
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
iters = 2000
for lr in learning_rates:
    for rs in regularization_strengths:
        softmax = Softmax()
        loss_hist = softmax.train(X_train, y_train, learning_rate=lr, reg=rs, num_iters=iters)
        plt.plot(loss_hist)
        plt.xlabel('Iteration number')
        plt.ylabel('Loss value')
        plt.show()
        y_train_pred = softmax.predict(X_train)
        acc_train = np.mean(y_train == y_train_pred)
        y_val_pred = softmax.predict(X_val)
        acc_val = np.mean(y_val == y_val_pred)
        results[(lr, rs)] = (acc_train, acc_val)
        if best_val < acc_val:
            best_val = acc_val
            best_softmax = softmax
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
Partial output:
lr 2.000000e-07 reg 2.400000e+04 train accuracy: 0.332612 val accuracy: 0.353000
lr 2.000000e-07 reg 2.500000e+04 train accuracy: 0.330449 val accuracy: 0.344000
lr 5.000000e-07 reg 2.400000e+04 train accuracy: 0.317898 val accuracy: 0.338000
lr 5.000000e-07 reg 2.500000e+04 train accuracy: 0.320429 val accuracy: 0.332000
lr 7.000000e-07 reg 2.400000e+04 train accuracy: 0.320612 val accuracy: 0.340000
lr 7.000000e-07 reg 2.500000e+04 train accuracy: 0.323082 val accuracy: 0.344000
lr 9.000000e-07 reg 2.400000e+04 train accuracy: 0.315408 val accuracy: 0.329000
lr 9.000000e-07 reg 2.500000e+04 train accuracy: 0.324673 val accuracy: 0.328000
**best validation accuracy achieved during cross-validation: 0.353000**
2.4 Testing
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
Output:
softmax on raw pixels final test set accuracy: 0.340000
2.5 Visualizing the learned weight matrix
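The notebook ends by visualizing the learned weight template for each of the 10 classes. A sketch of that visualization, assuming best_softmax holds the classifier trained in Section 2.3 and that numpy/pyplot are imported as in the rest of the notebook:

# Visualize the learned weights for each class.
w = best_softmax.W[:-1, :]          # strip out the bias row, leaving shape (3072, 10)
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    # Rescale the weights into the 0..255 range so they display as an image.
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
plt.show()

The resulting templates typically look like blurred, averaged versions of each class (e.g. a car-like blob, a reddish-brown horse shape).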
3. Inline question answers
Inline Question 1
Question: Why do we expect our loss to be close to -log(0.1)? Explain briefly.
Answer: The weight matrix is scaled by 0.0001, so all of its entries are tiny and every entry of the resulting score vector is close to zero. Because W is initialized randomly with such small values, every class receives roughly the same score, and after the softmax every class gets roughly the same probability. With 10 classes that probability is about 0.1, so the cross-entropy loss is about -log(0.1).
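A quick numeric check of that argument, using scores that are all exactly zero as a stand-in for the near-zero scores produced by the tiny random weights:

import numpy as np

scores = np.zeros(10)                           # near-zero scores from the tiny random weights
probs = np.exp(scores) / np.sum(np.exp(scores))
print(probs)                                    # every class gets probability 0.1
print(-np.log(probs[0]))                        # 2.302585... = -log(0.1)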
Inline Question 2
Question (True/False): Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.
Answer: True. The added datapoint may be easy for the SVM to classify, so every hinge term is clipped to zero by the max and the SVM loss stays unchanged. The softmax classifier, however, always produces a full probability distribution, and the resulting cross-entropy is strictly positive; adding any datapoint therefore always increases the total softmax loss, even if only by a very small amount.
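A small numeric illustration of the difference, using one hypothetical score vector for an "easy" datapoint whose correct class is 0:

import numpy as np

scores = np.array([10.0, 1.0, 2.0])   # hypothetical scores; class 0 is correct and wins by a wide margin
y = 0

# Multiclass SVM (hinge) loss with margin 1: every term is clipped to zero.
margins = np.maximum(0, scores - scores[y] + 1)
margins[y] = 0
print(np.sum(margins))        # 0.0 -> adding this point leaves the total SVM loss unchanged

# Softmax cross-entropy loss: the correct-class probability is < 1,
# so the loss is strictly positive, however small.
shifted = scores - np.max(scores)
probs = np.exp(shifted) / np.sum(np.exp(shifted))
print(-np.log(probs[y]))      # small but strictly positive -> the total softmax loss always grows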