One hidden layer Neural Network - Computing a Neural Network's Output

These are my notes from studying the Coursera class "Neural Networks & Deep Learning" by Andrew Ng, section 3.3 "Computing a Neural Network's Output". It shows how to compute the output of a neural network via vectorization when there is one training example. I am sharing it in the hope that it helps!

————————————————
Let's see how the neural network computes its output when there is only one training example. It is like logistic regression, repeated many times!

figure-1

Figure-1 shows how to compute the output of logistic regression.
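As a reminder, the logistic-regression computation in figure-1 can be sketched in NumPy as follows (the concrete numbers here are placeholders for illustration, not values from the course):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

# One training example x with 3 features, weights w, bias b
x = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)
w = np.array([[0.1], [0.2], [0.3]])   # shape (3, 1)
b = 0.5

z = w.T @ x + b    # linear part, shape (1, 1)
a = sigmoid(z)     # predicted probability y-hat
```

Each hidden unit of the neural network repeats exactly this computation with its own `w` and `b`.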

figure-2

Figure-2 shows how to compute the activations of the hidden layer of the neural network.

Let's vectorize this:

figure-3

For a given training example x, we can vectorize the computation of the hidden layer and the output layer as shown in figure-3, where:

z^{[1]}=\begin{bmatrix} z^{[1]}_{1}\\ z^{[1]}_{2}\\ z^{[1]}_{3}\\ z^{[1]}_{4} \end{bmatrix}\in \mathbb{R}^{4\times 1}

W^{[1]}=\begin{bmatrix} (w^{[1]}_{1})^{T}\\ (w^{[1]}_{2})^{T}\\ (w^{[1]}_{3})^{T}\\ (w^{[1]}_{4})^{T} \end{bmatrix}\in \mathbb{R}^{4\times 3}

x= \begin{bmatrix} x_{1}\\ x_{2}\\ x_{3} \end{bmatrix}\in \mathbb{R}^{3\times 1}

b^{[1]}=\begin{bmatrix} b^{[1]}_{1}\\ b^{[1]}_{2}\\ b^{[1]}_{3}\\ b^{[1]}_{4} \end{bmatrix}\in \mathbb{R}^{4\times 1}

a^{[1]}=\begin{bmatrix} a^{[1]}_{1}\\ a^{[1]}_{2}\\ a^{[1]}_{3}\\ a^{[1]}_{4} \end{bmatrix}\in \mathbb{R}^{4\times 1}

z^{[2]}=\begin{bmatrix} z^{[2]}_{1} \end{bmatrix}\in \mathbb{R}^{1\times 1}

W^{[2]}=\begin{bmatrix} (w^{[2]}_{1})^{T} \end{bmatrix}\in \mathbb{R}^{1\times 4}

b^{[2]}=\begin{bmatrix} b^{[2]}_{1} \end{bmatrix}\in \mathbb{R}^{1\times 1}

a^{[2]}=\begin{bmatrix} a^{[2]}_{1} \end{bmatrix}=\hat{y}\in \mathbb{R}^{1\times 1}
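Putting the shapes above together, the full two-layer forward pass for one example can be sketched in NumPy. The weights here are random placeholders (not trained values), and sigmoid is used for both layers, matching the lecture's setup:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Dimensions from the text: 3 inputs, 4 hidden units, 1 output
x  = rng.standard_normal((3, 1))   # input example,        3x1
W1 = rng.standard_normal((4, 3))   # hidden-layer weights, 4x3
b1 = np.zeros((4, 1))              # hidden-layer biases,  4x1
W2 = rng.standard_normal((1, 4))   # output-layer weights, 1x4
b2 = np.zeros((1, 1))              # output-layer bias,    1x1

# Vectorized forward pass for a single example
z1 = W1 @ x + b1                   # z^[1], 4x1
a1 = sigmoid(z1)                   # a^[1], 4x1 hidden activations
z2 = W2 @ a1 + b2                  # z^[2], 1x1
a2 = sigmoid(z2)                   # a^[2] = y-hat, 1x1
```

Each `@` product collapses the four per-unit dot products of figure-2 into a single matrix multiplication, which is the whole point of the vectorization.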

