Put simply, weight decay exists to keep the weight values from becoming too large or too small.
Reposted from: http://visualstudiomagazine.com/Articles/2014/07/01/Weight-Decay-and-Restriction.aspx?Page=2
Understanding Weight Decay
Weight decay is probably best explained using a concrete example. Suppose some weight has an initial value of 3.000. The idea of weight decay is to iteratively reduce the magnitude of the weight so that it doesn't become extremely large, positively or negatively, during training. Suppose the weight decay parameter has value 0.10 (in real life, weight decay parameters are typically much smaller). The weight value is decreased by 10 percent of its current value in each iteration. For example:
3.000 -> 3.000 - (0.10)(3.000) = 3.000 - 0.300 = 2.700
2.700 -> 2.700 - (0.10)(2.700) = 2.700 - 0.270 = 2.430
2.430 -> 2.430 - (0.10)(2.430) = 2.430 - 0.243 = 2.187
and so on
Now, observe that subtracting 0.10 times the current weight value in each iteration is equivalent to multiplying the weight by 0.90 in each iteration:
3.000 -> 3.000 * 0.90 = 2.700
2.700 -> 2.700 * 0.90 = 2.430
2.430 -> 2.430 * 0.90 = 2.187
and so on
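A few lines of Python (my addition, not from the original article) make the equivalence easy to verify; the decay value 0.10 and starting weight 3.000 are the numbers from the example above:

```python
# Verify that subtracting decay * w each step is the same as
# multiplying w by (1 - decay) each step.
decay = 0.10
w_subtract = 3.000
w_multiply = 3.000

for step in range(1, 4):
    w_subtract = w_subtract - decay * w_subtract   # subtractive form
    w_multiply = w_multiply * (1.0 - decay)        # multiplicative form
    print(step, round(w_subtract, 3), round(w_multiply, 3))

# Prints:
# 1 2.7 2.7
# 2 2.43 2.43
# 3 2.187 2.187
```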
The point here is that, when using commercial or open source neural network systems, you have to be careful to distinguish between weight decay parameters that represent a fraction of the weight to subtract (0.10 in the example above) and those that represent a multiplication factor (0.90 in the example).
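To make the distinction concrete, here is a minimal sketch (my own, not from the article) of a single gradient-descent weight update written under both conventions; the function and parameter names are hypothetical:

```python
def update_subtractive(w, grad, eta, decay_amount):
    # decay_amount is the fraction of the weight to subtract, e.g. 0.10
    return w - eta * grad - decay_amount * w

def update_multiplicative(w, grad, eta, decay_factor):
    # decay_factor is what the weight is multiplied by, e.g. 0.90
    return decay_factor * w - eta * grad

# The two conventions agree exactly when decay_factor == 1.0 - decay_amount.
w, grad, eta = 3.000, 0.0, 0.05   # zero gradient isolates the decay effect
print(round(update_subtractive(w, grad, eta, 0.10), 3))     # 2.7
print(round(update_multiplicative(w, grad, eta, 0.90), 3))  # 2.7
```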
At first thought, weight decay has the feel of a hack. But weight decay is based on some solid mathematics. Using some fancy math footwork, it can be shown that if lambda represents a theoretical weight decay, then the updated value of a weight wt is given by:

wt' = wt * (1 - eta * lambda)
where eta is usually the learning rate, though its exact meaning can vary from reference to reference. The point here is that the weight decay parameter value can have different meanings in different contexts.
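For completeness, the "fancy math footwork" is ordinary L2 regularization (this derivation is my addition, not from the article): a penalty proportional to the squared weights is added to the error function E, and a gradient-descent step with learning rate eta then produces the shrink factor. A sketch:

```latex
% Add an L2 penalty (scaled by lambda) to the error function E:
E' = E + \frac{\lambda}{2} \sum_i w_i^2
% The gradient with respect to a single weight w gains a term lambda * w:
\frac{\partial E'}{\partial w} = \frac{\partial E}{\partial w} + \lambda w
% A gradient-descent step with learning rate eta therefore shrinks w:
w' = w - \eta \left( \frac{\partial E}{\partial w} + \lambda w \right)
   = w \, (1 - \eta \lambda) - \eta \, \frac{\partial E}{\partial w}
```

With the error gradient set aside, this reduces to the wt' = wt * (1 - eta * lambda) formula above.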