Basics of Neural Network Programming - Vectorization

These are my notes from the vectorization section of the Coursera class Neural Networks & Deep Learning by Andrew Ng, shared here for reference.


Vectorization is the art of getting rid of explicit for loops in your code. The ability to vectorize code has become a key skill.

 

Figure-1

 

Figure-1 shows two different ways to calculate z = w^T x + b in logistic regression. Using a for loop is very time-consuming when there is a large number of features, while the vectorized version is much faster.
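As a small sketch of the two approaches from Figure-1 (the feature count and random data below are just placeholder values, not from the course), both ways of computing z give the same result:

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical example data
n = 1000                        # assumed number of features
w = rng.standard_normal(n)      # weight vector
x = rng.standard_normal(n)      # feature vector
b = 0.5                         # bias term

# Non-vectorized: accumulate w[i] * x[i] with an explicit for loop
z_loop = 0.0
for i in range(n):
    z_loop += w[i] * x[i]
z_loop += b

# Vectorized: a single call to np.dot
z_vec = np.dot(w, x) + b

assert np.isclose(z_loop, z_vec)
```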

Let's illustrate this with a little demo.

Figure-2

Both the vectorized and non-vectorized versions output the same value for the variable c. The vectorized version took about 1 ms; the explicit for loop took more than 700 ms. So, if you remember to vectorize your code, it can run over 700 times faster.
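The timing demo behind Figure-2 can be sketched roughly as follows (exact timings will vary by machine; the array size of one million matches the course demo, but the timer and variable names here are my own choices):

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Vectorized version: one np.dot call
tic = time.time()
c = np.dot(a, b)
toc = time.time()
print(f"Vectorized version: {1000 * (toc - tic):.3f} ms")

# Non-vectorized version: explicit for loop
c_loop = 0.0
tic = time.time()
for i in range(n):
    c_loop += a[i] * b[i]
toc = time.time()
print(f"For loop: {1000 * (toc - tic):.3f} ms")

# Both versions compute the same dot product
assert np.isclose(c, c_loop)
```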

A lot of scalable deep learning implementations run on a GPU (Graphics Processing Unit), but the demos above were actually run in a Jupyter notebook on the CPU. It turns out that both GPUs and CPUs have parallelization instructions, sometimes called SIMD (Single Instruction, Multiple Data) instructions. If you use built-in functions like np.dot, you enable NumPy to take much better advantage of this parallelism and do your computations much faster.

So, the rule of thumb to remember is: whenever possible, avoid using explicit for loops.

