# CIFAR-10 Image Classification with a Linear SVM

CSDN blog: 红色石头的专栏 (Red Stone's Column)

GitHub: RedstoneWill

1. Linear Support Vector Machine (LSVM)

2. Dual Support Vector Machine (DSVM)

3. Kernel Support Vector Machine (KSVM)

4. Soft-Margin Support Vector Machine

5. Kernel Logistic Regression (KLR)

6. Support Vector Regression (SVR)

## 2. Linear Classification and the Score Function

$s = Wx + b$

Appending a constant 1 to every input $x$ and folding the bias $b$ into an extra column of $W$ (the bias trick) simplifies the score function to:

$s = Wx$
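To make the shapes concrete, here is a minimal sketch of the score computation for CIFAR-10 (the values are random placeholders, not trained weights; following the convention of the code later in this post, scores are computed as `x.dot(W)` with `W` of shape `(D, C)`):

```python
import numpy as np

np.random.seed(0)

# CIFAR-10 sizes: a 3x32x32 image flattens to 3072 features; there are 10 classes.
D, C = 3072, 10
x = np.random.randn(D)              # one flattened image (placeholder values)
W = 0.001 * np.random.randn(D, C)   # weights; the bias can be folded in as an extra row

s = x.dot(W)                        # one score per class
print(s.shape)                      # (10,)
```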

## 3. Optimization Strategy and the Loss Function

For each example $x_i$ with label $y_i$, the SVM wants the score of the correct class to exceed every other class score by at least a margin $\Delta$:

$s_{y_i} \ge s_j + \Delta$

Any violation of this margin is penalized by the multiclass hinge loss:

$L_i = \sum_{j \ne y_i} \max\left(0,\; s_j - s_{y_i} + \Delta\right)$

For example, with correct-class score $s_{y_i} = 5$, two other scores $-1$ and $4$, and $\Delta = 3$:

$L_i = \max(0,\,-1-5+3) + \max(0,\,4-5+3) = 0 + 2 = 2$
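The arithmetic above can be checked with a few lines of NumPy (a minimal sketch; the helper name is ours, not the author's):

```python
import numpy as np

def hinge_loss_i(scores, y, delta):
    # Multiclass hinge loss for a single example with label y.
    margins = np.maximum(0, scores - scores[y] + delta)
    margins[y] = 0  # the correct class contributes nothing
    return margins.sum()

scores = np.array([5.0, -1.0, 4.0])        # correct class is index 0
print(hinge_loss_i(scores, y=0, delta=3.0))  # 2.0
```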

A squared-hinge variant penalizes margin violations quadratically:

$L_i = \sum_{j \ne y_i} \max\left(0,\; s_j - s_{y_i} + \Delta\right)^2$

In practice the margin is commonly fixed at $\Delta = 1$:

$L_i = \sum_{j \ne y_i} \max\left(0,\; s_j - s_{y_i} + 1\right)$

To prevent the model from overfitting, the SVM adds a regularization term, for example the L2 penalty:

$R(W) = \sum_k \sum_l w_{k,l}^2$

The full loss averages the per-example losses over all $N$ training examples and adds the weighted penalty:

$L = \frac{1}{N}\sum_i L_i + \lambda R(W)$
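Putting the data loss and the L2 penalty together, a minimal sketch on toy data (the sizes and $\lambda$ are arbitrary assumptions for illustration):

```python
import numpy as np

np.random.seed(0)
N, D, C = 5, 8, 3                  # toy sizes: 5 examples, 8 features, 3 classes
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
W = 0.01 * np.random.randn(D, C)
delta, lam = 1.0, 0.5

scores = X.dot(W)                                  # (N, C)
correct = scores[np.arange(N), y].reshape(-1, 1)   # (N, 1) correct-class scores
margins = np.maximum(0, scores - correct + delta)
margins[np.arange(N), y] = 0                       # correct class contributes nothing
L = margins.sum() / N + lam * np.sum(W * W)        # data loss + lambda * R(W)
print(L)
```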

## 4. Linear SVM in Practice

The CIFAR-10 dataset contains 60,000 RGB color images of size 3×32×32 in 10 classes: 50,000 for training and 10,000 for testing (with cross-validation). Its distinguishing feature is that it brings recognition to everyday objects in a multi-class setting, which makes it a classic and widely used benchmark.

```python
import numpy as np
import matplotlib.pyplot as plt

# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
# (X_train, y_train are assumed to have been loaded already.)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()
```

```python
# Vectorized multiclass SVM loss over all num_train examples.
scores = X.dot(W)
correct_class_score = scores[range(num_train), list(y)].reshape(-1, 1)  # (N, 1)
margin = np.maximum(0, scores - correct_class_score + 1)
margin[range(num_train), list(y)] = 0  # the correct class contributes no loss
loss = np.sum(margin) / num_train + 0.5 * reg * np.sum(W * W)
```
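A quick sanity check is to compare the vectorized loss against a naive double loop on toy data (a self-contained sketch; sizes and function names are assumptions):

```python
import numpy as np

def svm_loss_naive(W, X, y, reg):
    # Loop over every example and every class.
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        for j in range(W.shape[1]):
            if j == y[i]:
                continue
            loss += max(0, scores[j] - scores[y[i]] + 1)
    return loss / num_train + 0.5 * reg * np.sum(W * W)

def svm_loss_vectorized(W, X, y, reg):
    # Same computation with array operations only.
    num_train = X.shape[0]
    scores = X.dot(W)
    correct = scores[np.arange(num_train), y].reshape(-1, 1)
    margin = np.maximum(0, scores - correct + 1)
    margin[np.arange(num_train), y] = 0
    return np.sum(margin) / num_train + 0.5 * reg * np.sum(W * W)

np.random.seed(1)
X = np.random.randn(20, 6)
y = np.random.randint(4, size=20)
W = 0.01 * np.random.randn(6, 4)
print(abs(svm_loss_naive(W, X, y, 0.1) - svm_loss_vectorized(W, X, y, 0.1)))  # ~0
```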

```python
# Vectorized gradient: every class with a positive margin contributes +x_i to its
# column of dW, and the correct class contributes -x_i once per violation.
num_classes = W.shape[1]
inter_mat = np.zeros((num_train, num_classes))
inter_mat[margin > 0] = 1
inter_mat[range(num_train), list(y)] = 0
inter_mat[range(num_train), list(y)] = -np.sum(inter_mat, axis=1)

dW = (X.T).dot(inter_mat)
dW = dW / num_train + reg * W
```
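The analytic gradient can be verified against a centered finite difference at a few random coordinates (a self-contained sketch on toy data; the combined loss-and-gradient helper below restates the formulas above and is our naming, not the author's):

```python
import numpy as np

def svm_loss_and_grad(W, X, y, reg):
    # Vectorized multiclass SVM loss and gradient, as derived above.
    num_train = X.shape[0]
    scores = X.dot(W)
    correct = scores[np.arange(num_train), y].reshape(-1, 1)
    margin = np.maximum(0, scores - correct + 1)
    margin[np.arange(num_train), y] = 0
    loss = np.sum(margin) / num_train + 0.5 * reg * np.sum(W * W)

    inter = (margin > 0).astype(float)
    inter[np.arange(num_train), y] = -np.sum(inter, axis=1)
    dW = X.T.dot(inter) / num_train + reg * W
    return loss, dW

np.random.seed(2)
X = np.random.randn(30, 5)
y = np.random.randint(3, size=30)
W = 0.01 * np.random.randn(5, 3)
loss, dW = svm_loss_and_grad(W, X, y, 0.1)

# Centered finite-difference check at a few random coordinates;
# away from the hinge kinks the two gradients should agree closely.
h = 1e-5
for _ in range(5):
    i, j = np.random.randint(W.shape[0]), np.random.randint(W.shape[1])
    W[i, j] += h;     lp, _ = svm_loss_and_grad(W, X, y, 0.1)
    W[i, j] -= 2 * h; lm, _ = svm_loss_and_grad(W, X, y, 0.1)
    W[i, j] += h      # restore
    print(abs((lp - lm) / (2 * h) - dW[i, j]))
```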

```python
# Gradient descent update.
W -= learning_rate * dW
```
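This update runs inside the `svm.train` method used during cross-validation below. A minimal sketch of what such a mini-batch SGD trainer might look like (the function shape, batch size, and hyperparameter defaults are assumptions, not the author's exact implementation):

```python
import numpy as np

def train(W, X, y, learning_rate=1e-3, reg=1e-3, num_iters=100, batch_size=64):
    """Mini-batch SGD on the multiclass SVM loss; returns W and the loss history."""
    num_train = X.shape[0]
    loss_history = []
    for it in range(num_iters):
        # Sample a mini-batch (with replacement, for simplicity).
        idx = np.random.choice(num_train, batch_size)
        Xb, yb = X[idx], y[idx]

        # Loss and gradient on the batch (same formulas as above).
        scores = Xb.dot(W)
        correct = scores[np.arange(batch_size), yb].reshape(-1, 1)
        margin = np.maximum(0, scores - correct + 1)
        margin[np.arange(batch_size), yb] = 0
        loss = np.sum(margin) / batch_size + 0.5 * reg * np.sum(W * W)
        loss_history.append(loss)

        inter = (margin > 0).astype(float)
        inter[np.arange(batch_size), yb] = -np.sum(inter, axis=1)
        dW = Xb.T.dot(inter) / batch_size + reg * W

        W -= learning_rate * dW  # the update step shown above
    return W, loss_history

np.random.seed(0)
X = np.random.randn(200, 10)
y = np.random.randint(3, size=200)
W = 0.001 * np.random.randn(10, 3)
W, hist = train(W, X, y, num_iters=50)
print(hist[0], hist[-1])
```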

```python
learning_rates = [1.4e-7, 1.5e-7, 1.6e-7]
regularization_strengths = [8000.0, 9000.0, 10000.0, 11000.0, 18000.0, 19000.0, 20000.0, 21000.0]

results = {}
best_lr = None
best_reg = None
best_val = -1   # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.

for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        loss_history = svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=2000)
        y_train_pred = svm.predict(X_train)
        accuracy_train = np.mean(y_train_pred == y_train)
        y_val_pred = svm.predict(X_val)
        accuracy_val = np.mean(y_val_pred == y_val)
        if accuracy_val > best_val:
            best_lr = lr
            best_reg = reg
            best_val = accuracy_val
            best_svm = svm
        results[(lr, reg)] = accuracy_train, accuracy_val
        print('lr: %e reg: %e train accuracy: %f val accuracy: %f' %
              (lr, reg, results[(lr, reg)][0], results[(lr, reg)][1]))
print('Best validation accuracy during cross-validation:\nlr = %e, reg = %e, best_val = %f' %
      (best_lr, best_reg, best_val))
```

```python
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
```

```
linear SVM on raw pixels final test set accuracy: 0.384000
```

```python
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1, :]  # strip out the bias row
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)

    # Rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
plt.show()
```

## 5. Summary

Reference: http://cs231n.github.io/linear-classify/
