# Solutions for *Neural Networks and Deep Learning* — ch03: How to Choose a Neural Network's Hyper-parameters

http://blog.csdn.net/u011239443/article/details/77748116

# Problem 1

In `update_mini_batch`, the weight update with the regularization factor removed:

    self.weights = [w - (eta / len(mini_batch)) * nw
                    for w, nw in zip(self.weights, nabla_w)]

In `total_cost`, remove the regularization term:

    cost += 0.5 * (lmbda / len(data)) * sum(
        np.linalg.norm(w) ** 2 for w in self.weights)
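For reference, the regularization term being removed can be checked in isolation. The following standalone sketch (the function name is illustrative, not from network2.py) reproduces the same expression:

```python
import numpy as np

def l2_cost_term(weights, lmbda, n):
    """L2 regularization term: 0.5 * (lambda / n) * sum of squared weight norms."""
    return 0.5 * (lmbda / n) * sum(np.linalg.norm(w) ** 2 for w in weights)

weights = [np.array([[3.0, 4.0]]), np.array([[0.0, 0.0]])]
# Frobenius norm of the first matrix is 5, so the term is 0.5 * (1/50) * 25 = 0.25
print(l2_cost_term(weights, lmbda=1.0, n=50))  # 0.25
```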

    net.SGD(training_data[:1000], 30, 10, 0.5,
            evaluation_data=validation_data[:100],
            monitor_evaluation_accuracy=True)

Epoch 30 training complete
Acc on evaluation: 17 / 100

Epoch 30 training complete
Acc on evaluation: 11 / 100

With λ = 10.0, the result is:

Epoch 29 training complete
Acc on evaluation: 11 / 100

With λ = 1.0, the result is:

Epoch 30 training complete
Acc on evaluation: 31 / 100

# Problem 2

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            lmbda=0.0,
            evaluation_data=None,
            monitor_evaluation_cost=False,
            monitor_evaluation_accuracy=False,
            monitor_training_cost=False,
            monitor_training_accuracy=False, max_try=100):

`cnt` records the number of epochs without improvement; once it reaches `max_try`, training breaks out of the loop. Here `monitor_evaluation_accuracy` is used as the example:

    cnt = 0
    for j in xrange(epochs):
        # ...
        if monitor_evaluation_accuracy:
            acc = self.accuracy(evaluation_data)
            evaluation_accuracy.append(acc)
            if len(evaluation_accuracy) > 1 and acc < evaluation_accuracy[-2]:
                cnt += 1
                if cnt >= max_try:
                    break
            else:
                cnt = 0
            print "Acc on evaluation: {} / {}".format(acc, n_data)
        # ...
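The counter logic above can be exercised on its own against a made-up accuracy sequence. This sketch (function name and test values are illustrative) returns the epoch index at which the loop would break:

```python
def early_stop_epoch(accuracies, max_try):
    """Index of the epoch at which training stops: cnt counts consecutive
    epochs whose accuracy drops below the previous epoch's; reaching
    max_try triggers the stop, and any non-drop resets the counter."""
    cnt = 0
    for j in range(1, len(accuracies)):
        if accuracies[j] < accuracies[j - 1]:
            cnt += 1
            if cnt >= max_try:
                return j
        else:
            cnt = 0
    return len(accuracies) - 1  # threshold never reached

# Three consecutive drops (epochs 2, 3, 4) trigger the stop at epoch 4.
print(early_stop_epoch([10, 12, 11, 10, 9, 15], max_try=3))  # 4
```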

# Problem 3

## Strategy and Implementation

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            lmbda=0.0,
            evaluation_data=None,
            monitor_evaluation_cost=False,
            monitor_evaluation_accuracy=False,
            monitor_training_cost=False,
            monitor_training_accuracy=False, min_x=0.01):

            if monitor_evaluation_accuracy:
                acc = self.accuracy(evaluation_data)
                evaluation_accuracy.append(acc)
                if len(evaluation_accuracy) > 1 and \
                        (acc - evaluation_accuracy[-2]) / float(n_data) < min_x:
                    break
                print "Acc on evaluation: {} / {}".format(acc, n_data)
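The stopping condition can likewise be checked standalone: stop at the first epoch whose improvement over the previous epoch, as a fraction of the evaluation set size, falls below `min_x`. The function name and numbers below are illustrative:

```python
def min_improvement_stop(accuracies, n_data, min_x):
    """Index of the first epoch whose accuracy gain over the previous
    epoch, divided by n_data, is below the threshold min_x."""
    for j in range(1, len(accuracies)):
        if (accuracies[j] - accuracies[j - 1]) / float(n_data) < min_x:
            return j
    return len(accuracies) - 1  # threshold never crossed

# Gains are 5%, 5%, then 1% < 2%, so training stops at epoch 3.
print(min_improvement_stop([10, 15, 20, 21, 30], n_data=100, min_x=0.02))  # 3
```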


## Comparison

With the no-improvement early-stopping strategy from Problem 2:

    net.SGD(training_data[:1000], 50, 10, 0.25, 5.0,
            evaluation_data=validation_data[:100],
            monitor_evaluation_accuracy=True, max_try=3)

Epoch 32 training complete
Acc on evaluation: 15 / 100

Epoch 3 training complete
Acc on evaluation: 17 / 100

# Problem 4

    cnt = 0
    del_cnt = 0
    for j in xrange(epochs):
        # ...
        if monitor_evaluation_accuracy:
            acc = self.accuracy(evaluation_data)
            evaluation_accuracy.append(acc)
            if len(evaluation_accuracy) > 1 and acc < evaluation_accuracy[-2]:
                cnt += 1
                if cnt >= max_try:
                    del_cnt += 1
                    if del_cnt >= 7:
                        break
                    eta /= 2.0
                    cnt = 0
            else:
                cnt = 0
            print "Acc on evaluation: {} / {}".format(acc, n_data)
        # ...
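The halving schedule can be traced on its own: each time accuracy fails to improve for `max_try` consecutive epochs, η is halved, and after the seventh such event training stops. This standalone sketch (names and data are illustrative) mirrors that logic:

```python
def eta_schedule(accuracies, eta, max_try, max_halvings=7):
    """Halve eta each time accuracy drops for max_try consecutive epochs;
    give up entirely after max_halvings such events. Returns the final
    eta and the number of halving events that occurred."""
    cnt, del_cnt = 0, 0
    for j in range(1, len(accuracies)):
        if accuracies[j] < accuracies[j - 1]:
            cnt += 1
            if cnt >= max_try:
                del_cnt += 1
                if del_cnt >= max_halvings:
                    break
                eta /= 2.0
                cnt = 0
        else:
            cnt = 0
    return eta, del_cnt

# Two no-improvement streaks of length 2 -> eta halved twice: 0.5 -> 0.125.
eta, halvings = eta_schedule([10, 9, 8, 7, 12, 11, 10, 9], eta=0.5, max_try=2)
print(eta, halvings)  # 0.125 2
```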

# Problem 5

- The obstacle to using gradient descent to determine λ: the regularized cost is $C = C_0 + \frac{\lambda}{2n}\sum_w w^2$, so taking the partial derivative with respect to λ gives
$\frac{\partial C}{\partial \lambda} = \frac{\sum_w w^2}{2n} = 0,$
which holds only when every weight is 0. The optimization objective would thus force w = 0, but the weights w are exactly what we set out to optimize in the first place.

- The obstacle to using gradient descent to determine η: the optimal η is not a constant; as the number of iterations grows, the optimal η keeps shrinking.
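The λ point can be verified numerically: the derivative $\frac{\sum_w w^2}{2n}$ is nonnegative for any weights and vanishes only when all of them are zero. A minimal sketch (function name is illustrative):

```python
import numpy as np

def dC_dlambda(weights, n):
    """Partial derivative of the L2-regularized cost w.r.t. lambda:
    sum of squared weights over 2n -- nonnegative, zero only if w = 0."""
    return sum(np.sum(w ** 2) for w in weights) / (2.0 * n)

w = [np.array([1.0, -2.0]), np.array([3.0])]
print(dC_dlambda(w, n=7))  # (1 + 4 + 9) / 14 = 1.0
```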
