Softmax is a common operation in deep learning. Although there are plenty of off-the-shelf packages that provide it, in some scenarios you have to implement it yourself. This post briefly looks at the numerical-stability problems a softmax implementation can run into.
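To see the problem concretely, here is a minimal NumPy sketch (the function name `naive_softmax` is just for illustration): exponentiating large logits directly overflows, and the result degenerates to NaN.

```python
import numpy as np

def naive_softmax(x):
    """Softmax computed directly from the definition; unsafe for large inputs."""
    e = np.exp(x)
    return e / e.sum()

logits = np.array([1000.0, 1000.0, 1000.0])
# exp(1000) overflows float64 (anything above ~exp(709) is inf), so inf / inf = nan:
print(naive_softmax(logits))  # [nan nan nan], plus an overflow RuntimeWarning
```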
- Solving the overflow problem: when some $x_i$ is large, $\exp(x_i)$ overflows, so we subtract $x_{max}$ from every input before exponentiating; dividing the numerator and denominator by the same factor leaves the quotient unchanged:
$$
\begin{aligned}
\text{Softmax}(x_{i}) &= \frac{\exp(x_i)}{\sum_{j=1}^{N} \exp(x_j)} \\
&= \frac{\exp(x_i) / \exp(x_{max})}{\sum_{j=1}^{N} \exp(x_j) / \exp(x_{max})} \\
&= \frac{\exp(x_i - x_{max})}{\sum_{j=1}^{N} \exp(x_j - x_{max})}
\end{aligned} \tag{1}
$$
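A minimal NumPy sketch of Eq. (1) (`stable_softmax` is our own name, not a library API): after subtracting the maximum, the largest exponent is $\exp(0) = 1$, so overflow cannot occur while the result stays mathematically identical.

```python
import numpy as np

def stable_softmax(x):
    """Softmax via Eq. (1): shift by the max so the largest exponent is exp(0) = 1."""
    shifted = x - np.max(x)   # x_i - x_max <= 0 for every i
    e = np.exp(shifted)       # values in (0, 1], no overflow possible
    return e / e.sum()

logits = np.array([1000.0, 1000.0, 1000.0])
print(stable_softmax(logits))  # [0.33333333 0.33333333 0.33333333]
```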
- Solving the underflow problem:
When $x_{max}$ is much larger than some $x_i$, the numerator $\exp(x_i - x_{max})$ can underflow to $0$. If the result is then fed into $\log$ (e.g., when computing the cross-entropy loss), this produces $\log(0)$, so the expression should be rearranged as follows.
$$
\begin{aligned}
\log \mathrm{softmax}(x_i) &= \log \Bigl( \frac{\exp(x_i - x_{max})}{\sum_{j=1}^{N} \exp(x_j - x_{max})} \Bigr) \\
&= \log \exp(x_i - x_{max}) - \log \sum_{j=1}^{N} \exp(x_j - x_{max}) \\
&= (x_i - x_{max}) - \log \underbrace{\sum_{j=1}^{N} \exp(x_j - x_{max})}_{\geq 1}
\end{aligned} \tag{2}
$$
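A sketch of Eq. (2) in NumPy (`log_softmax` here is our own helper, not a specific library function): the sum inside the log is at least $1$ because the $j = \arg\max$ term contributes $\exp(0) = 1$, so $\log(0)$ can no longer occur even when other terms underflow to $0$.

```python
import numpy as np

def log_softmax(x):
    """Log-softmax via Eq. (2); the sum inside the log is >= 1, so log(0) cannot occur."""
    shifted = x - np.max(x)
    return shifted - np.log(np.sum(np.exp(shifted)))

logits = np.array([0.0, -1000.0])

# log applied to the stabilized softmax of Eq. (1): exp(-1000) underflows to 0,
# so the second entry becomes log(0) = -inf.
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
print(np.log(probs))        # [  0. -inf]

# Eq. (2) keeps everything finite: approximately [0, -1000].
print(log_softmax(logits))  # [    0. -1000.]
```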