- Introduction
If the condition number of the Hessian matrix of the objective at the optimum is high, the problem is said to exhibit pathological curvature, and first-order gradient descent will have trouble making progress.
The amount of curvature, and thus the success of our optimization, is not invariant to reparameterization: there may be multiple equivalent ways of parameterizing the same model, some of which are much easier to optimize than others.
Finding good ways of parameterizing neural networks is thus an important problem in deep learning.
Improving the general optimizability of deep networks is a challenging task, but since many neural architectures share the same basic building blocks (neurons), improving these building blocks improves the performance of a very wide range of model architectures and could thus be very useful.
The method is inspired by batch normalization, but it is deterministic and does not share batch normalization's property of adding noise to the gradients.
- Weight Normalization
The authors propose to reparameterize each weight vector w in terms of a parameter vector v and a scalar parameter g as w = (g / ||v||) v, and to perform SGD with respect to those parameters instead.
This reparameterization has the effect of fixing the Euclidean norm of the weight vector w: we now have ||w|| = g (the magnitude), independent of the parameters v (the direction).
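To make the decomposition concrete, here is a minimal PyTorch sketch (writing the formula out by hand rather than using the built-in utility) showing that ||w|| equals g no matter what v is:

import torch

# direction parameters v and magnitude parameter g (example values)
v = torch.randn(20, requires_grad=True)
g = torch.tensor(3.0, requires_grad=True)

# the weight normalization reparameterization: w = (g / ||v||) v
w = g * v / v.norm()

print(w.norm())  # prints 3.0 (= g), independent of v

SGD is then performed on v and g; w is just a deterministic function of them.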
- Code in PyTorch
torch.nn.utils.weight_norm(module, name='weight', dim=0)
This applies the reparameterization to the parameter called name of the given module, replacing it with two new parameters, name + '_g' (the magnitude g) and name + '_v' (the direction v); dim selects the dimension along which a separate norm is kept (dim=0 gives one magnitude per output unit).
>>> import torch.nn as nn
>>> from torch.nn.utils import weight_norm
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_g.size()  # one scalar magnitude g per output unit
torch.Size([40, 1])
>>> m.weight_v.size()  # direction parameters v, same shape as the original weight
torch.Size([40, 20])
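For completeness, a small usage sketch continuing the example above: the effective weight w is recomputed from g and v on each forward pass, and the reparameterization can be folded back into a plain weight with remove_weight_norm.

>>> import torch
>>> x = torch.randn(8, 20)   # a batch of 8 inputs
>>> m(x).size()              # forward pass uses w = (g / ||v||) v
torch.Size([8, 40])
>>> torch.nn.utils.remove_weight_norm(m)
Linear(in_features=20, out_features=40, bias=True)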