# Deep Learning: Softmax Regression

Key terms: softmax regression, supervised learning, unsupervised learning, deep learning, logistic regression, intercept term, binary classification, class labels, hypothesis, cost function, multi-class classification, weight decay.

Recall that in logistic regression, the hypothesis took the form:

\begin{align}h_\theta(x) = \frac{1}{1+\exp(-\theta^Tx)},\end{align}

and the parameters $\theta$ were trained to minimize the cost function:

\begin{align}J(\theta) = -\frac{1}{m} \left[ \sum_{i=1}^m y^{(i)} \log h_\theta(x^{(i)}) + (1-y^{(i)}) \log (1-h_\theta(x^{(i)})) \right]\end{align}
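To make these two formulas concrete, here is a minimal NumPy sketch (the names `logistic_hypothesis` and `logistic_cost` are illustrative, not from the original text):

```python
import numpy as np

def logistic_hypothesis(theta, x):
    """h_theta(x) = 1 / (1 + exp(-theta^T x))."""
    return 1.0 / (1.0 + np.exp(-theta @ x))

def logistic_cost(theta, X, y):
    """Average negative log-likelihood over m examples.

    X: (m, n) design matrix whose rows are the x^(i); y: (m,) labels in {0, 1}.
    """
    h = 1.0 / (1.0 + np.exp(-X @ theta))  # (m,) predicted probabilities
    eps = 1e-12                           # guard against log(0)
    return -np.mean(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
```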

Softmax regression generalizes this to multi-class classification, where the label $y^{(i)}$ can take any of $k$ values. Given an input $x^{(i)}$, the hypothesis outputs a vector of $k$ estimated class probabilities:

\begin{align}h_\theta(x^{(i)}) =\begin{bmatrix}p(y^{(i)} = 1 | x^{(i)}; \theta) \\p(y^{(i)} = 2 | x^{(i)}; \theta) \\\vdots \\p(y^{(i)} = k | x^{(i)}; \theta)\end{bmatrix}=\frac{1}{ \sum_{j=1}^{k}{e^{ \theta_j^T x^{(i)} }} }\begin{bmatrix}e^{ \theta_1^T x^{(i)} } \\e^{ \theta_2^T x^{(i)} } \\\vdots \\e^{ \theta_k^T x^{(i)} } \\\end{bmatrix}\end{align}

For convenience, we write $\theta$ for the matrix obtained by stacking the parameter vectors $\theta_1, \ldots, \theta_k$ as rows:

$\theta = \begin{bmatrix}\mbox{---} \theta_1^T \mbox{---} \\\mbox{---} \theta_2^T \mbox{---} \\\vdots \\\mbox{---} \theta_k^T \mbox{---} \\\end{bmatrix}$
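A minimal NumPy sketch of this hypothesis (names are illustrative). Subtracting the maximum score before exponentiating avoids overflow; as the parameterization section below shows, such a shift does not change the probabilities:

```python
import numpy as np

def softmax_hypothesis(theta, x):
    """Return the k class probabilities p(y = j | x; theta).

    theta: (k, n) matrix whose rows are theta_1^T, ..., theta_k^T.
    x:     (n,) feature vector.
    """
    scores = theta @ x       # (k,) values theta_j^T x
    scores -= scores.max()   # numerical-stability shift; leaves probabilities unchanged
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()
```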

## Cost Function

We now describe the cost function for softmax regression. In the formula below, $\textstyle 1\{\cdot\}$ is the indicator function: $\textstyle 1\{$a true statement$\textstyle \}=1$ and $\textstyle 1\{$a false statement$\textstyle \}=0$. For example, $\textstyle 1\{2+2=4\}$ evaluates to 1, while $\textstyle 1\{1+1=5\}$ evaluates to 0. Our cost function is:

\begin{align}J(\theta) = - \frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=1}^{k} 1\left\{y^{(i)} = j\right\} \log \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^k e^{ \theta_l^T x^{(i)} }}\right]\end{align}
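A sketch of this cost in NumPy, assuming labels are coded $0, \ldots, k-1$ rather than $1, \ldots, k$, and reusing the stability shift from above:

```python
def softmax_cost(theta, X, y):
    """J(theta) for softmax regression.

    theta: (k, n) parameter matrix; X: (m, n) examples;
    y: (m,) integer labels coded 0 .. k-1.
    """
    m = X.shape[0]
    scores = X @ theta.T                         # (m, k): theta_j^T x^(i)
    scores -= scores.max(axis=1, keepdims=True)  # stability shift
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(m), y])  # -1/m * sum of log p(y^(i) | x^(i))
```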

Note that this generalizes the logistic regression cost, which can equivalently be written in the same indicator notation:

\begin{align}J(\theta) &= -\frac{1}{m} \left[ \sum_{i=1}^m (1-y^{(i)}) \log (1-h_\theta(x^{(i)})) + y^{(i)} \log h_\theta(x^{(i)}) \right] \\&= - \frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=0}^{1} 1\left\{y^{(i)} = j\right\} \log p(y^{(i)} = j | x^{(i)} ; \theta) \right]\end{align}

The softmax cost function is the same idea, except that it sums over the $k$ possible values of the class label, with conditional probabilities

$p(y^{(i)} = j | x^{(i)} ; \theta) = \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^k e^{ \theta_l^T x^{(i)}} }$.

There is no closed-form way to minimize $J(\theta)$, so we use an iterative algorithm such as gradient descent or L-BFGS. Taking derivatives, the gradient with respect to $\theta_j$ is:

\begin{align}\nabla_{\theta_j} J(\theta) = - \frac{1}{m} \sum_{i=1}^{m}{ \left[ x^{(i)} \left( 1\{ y^{(i)} = j\} - p(y^{(i)} = j | x^{(i)}; \theta) \right) \right] }\end{align}
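A matching gradient sketch under the same conventions as `softmax_cost`; the returned matrix stacks the gradients for $\theta_1, \ldots, \theta_k$ as rows:

```python
def softmax_gradient(theta, X, y):
    """Gradient of J(theta); row j of the result is grad_{theta_j} J."""
    m = X.shape[0]
    scores = X @ theta.T
    scores -= scores.max(axis=1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)  # (m, k): p(y^(i) = j | x^(i); theta)
    indicator = np.zeros_like(probs)
    indicator[np.arange(m), y] = 1.0           # 1{y^(i) = j}
    return -(indicator - probs).T @ X / m      # (k, n)
```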

## Properties of the Softmax Regression Parameterization

Softmax regression has an unusual property: it has a "redundant" set of parameters. To see this, suppose we subtract some fixed vector $\textstyle \psi$ from every parameter vector $\textstyle \theta_j$, so that each $\textstyle \theta_j$ is replaced by $\textstyle \theta_j - \psi$ ($\textstyle j=1, \ldots, k$). The hypothesis then becomes:

\begin{align}p(y^{(i)} = j | x^{(i)} ; \theta)&= \frac{e^{(\theta_j-\psi)^T x^{(i)}}}{\sum_{l=1}^k e^{ (\theta_l-\psi)^T x^{(i)}}} \\&= \frac{e^{\theta_j^T x^{(i)}} e^{-\psi^Tx^{(i)}}}{\sum_{l=1}^k e^{\theta_l^T x^{(i)}} e^{-\psi^Tx^{(i)}}} \\&= \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^k e^{ \theta_l^T x^{(i)}}}.\end{align}
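In other words, subtracting $\psi$ from every $\theta_j$ does not change the predictions at all: the parameterization is redundant, and $J(\theta)$ has infinitely many minimizers. A quick numeric check of this invariance, reusing the illustrative `softmax_hypothesis` sketch above:

```python
rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 5))  # k = 4 classes, n = 5 features
x = rng.normal(size=5)
psi = rng.normal(size=5)

p1 = softmax_hypothesis(theta, x)
p2 = softmax_hypothesis(theta - psi, x)  # subtract psi from every row theta_j
print(np.allclose(p1, p2))               # True: identical class probabilities
```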

## Weight Decay

To resolve this overparameterization, we modify the cost function by adding a weight decay term $\textstyle \frac{\lambda}{2} \sum_{i=1}^k \sum_{j=0}^n \theta_{ij}^2$ that penalizes large parameter values:

\begin{align}J(\theta) = - \frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=1}^{k} 1\left\{y^{(i)} = j\right\} \log \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^k e^{ \theta_l^T x^{(i)} }} \right] + \frac{\lambda}{2} \sum_{i=1}^k \sum_{j=0}^n \theta_{ij}^2\end{align}

With this penalty ($\lambda > 0$), the cost function becomes strictly convex and has a unique minimum. The gradient becomes:

\begin{align}\nabla_{\theta_j} J(\theta) = - \frac{1}{m} \sum_{i=1}^{m}{ \left[ x^{(i)} ( 1\{ y^{(i)} = j\} - p(y^{(i)} = j | x^{(i)}; \theta) ) \right] } + \lambda \theta_j\end{align}
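Both changes are one-liners on top of the earlier sketches; here the penalty is applied to every entry of `theta`, matching the double sum in the formula above:

```python
def softmax_cost_reg(theta, X, y, lam):
    """Softmax cost with weight decay term (lam / 2) * sum(theta ** 2)."""
    return softmax_cost(theta, X, y) + 0.5 * lam * np.sum(theta ** 2)

def softmax_gradient_reg(theta, X, y, lam):
    """Gradient of the regularized cost: unregularized gradient plus lam * theta."""
    return softmax_gradient(theta, X, y) + lam * theta
```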

## Relationship Between Softmax Regression and Logistic Regression

When $k = 2$, the softmax hypothesis takes the form:

\begin{align}h_\theta(x) &=\frac{1}{ e^{\theta_1^Tx} + e^{ \theta_2^T x } }\begin{bmatrix}e^{ \theta_1^T x } \\e^{ \theta_2^T x }\end{bmatrix}\end{align}

Exploiting the parameter redundancy, we can set $\psi = \theta_1$ and subtract it from both parameter vectors, giving:

\begin{align}h(x) &=\frac{1}{ e^{\vec{0}^Tx} + e^{ (\theta_2-\theta_1)^T x } }\begin{bmatrix}e^{ \vec{0}^T x } \\e^{ (\theta_2-\theta_1)^T x }\end{bmatrix} \\&=\begin{bmatrix}\frac{1}{ 1 + e^{ (\theta_2-\theta_1)^T x } } \\\frac{e^{ (\theta_2-\theta_1)^T x }}{ 1 + e^{ (\theta_2-\theta_1)^T x } }\end{bmatrix} \\&=\begin{bmatrix}\frac{1}{ 1 + e^{ (\theta_2-\theta_1)^T x } } \\1 - \frac{1}{ 1 + e^{ (\theta_2-\theta_1)^T x } } \\\end{bmatrix}\end{align}
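So with $\theta' = \theta_2 - \theta_1$, the two-class softmax hypothesis predicts $\frac{1}{1 + e^{\theta'^T x}}$ for one class and the complement for the other, which is exactly the logistic regression model. A quick numeric check, again reusing the illustrative `softmax_hypothesis` sketch:

```python
rng = np.random.default_rng(1)
x = rng.normal(size=5)
theta_pair = rng.normal(size=(2, 5))           # theta_1 and theta_2 for k = 2

p_softmax = softmax_hypothesis(theta_pair, x)  # two-class softmax probabilities
theta_prime = theta_pair[1] - theta_pair[0]    # theta' = theta_2 - theta_1
p_first = 1.0 / (1.0 + np.exp(theta_prime @ x))
print(np.allclose(p_softmax, [p_first, 1.0 - p_first]))  # True
```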

## Softmax Regression vs. k Binary Classifiers

The choice between the two depends on whether the classes are mutually exclusive. If every example belongs to exactly one of the $k$ classes, softmax regression is the natural fit; if the classes can overlap, so that one example may carry several labels at once, it is better to train $k$ separate binary logistic regression classifiers, one per label.
