GPU Usage

In [1]:
import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn
In [2]:
mx.cpu(),mx.gpu(),mx.gpu(1)
Out[2]:
(cpu(0), gpu(0), gpu(1))
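mx.cpu() and mx.gpu(i) are just context descriptors; MXNet does not verify at this point that the device actually exists. On a machine that may not have a GPU, a common fallback pattern is sketched below (the helper name try_gpu is only for illustration):

import mxnet as mx
from mxnet import nd

def try_gpu():
    # Probe gpu(0) by allocating a tiny array; fall back to the CPU if that fails.
    try:
        ctx = mx.gpu()
        _ = nd.zeros((1,), ctx=ctx)
    except mx.base.MXNetError:
        ctx = mx.cpu()
    return ctx

ctx = try_gpu()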
In [3]:
x = nd.array([1,2,3])
x
Out[3]:
[1. 2. 3.]
<NDArray 3 @cpu(0)>
In [4]:
x.context
Out[4]:
cpu(0)
In [5]:
!nvidia-smi
 
Wed Nov 28 15:42:11 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 00000000:02:00.0 Off |                  N/A |
| 33%   57C    P2    58W / 250W |   3467MiB / 12189MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN X (Pascal)    Off  | 00000000:03:00.0 Off |                  N/A |
| 39%   63C    P2    63W / 250W |   1280MiB / 12189MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  TITAN X (Pascal)    Off  | 00000000:83:00.0 Off |                  N/A |
| 44%   75C    P2    86W / 250W |   1320MiB / 12189MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  TITAN X (Pascal)    Off  | 00000000:84:00.0 Off |                  N/A |
| 43%   74C    P2    84W / 250W |   1719MiB / 12189MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      6577      C   python                                       209MiB |
|    0      8489      C   python                                      2329MiB |
|    0      9813      C   python                                       211MiB |
|    0     29538      C   python                                       173MiB |
|    1     17615      C   python                                       591MiB |
|    1     31208      C   python                                       679MiB |
|    2      6577      C   python                                       547MiB |
|    2      9813      C   python                                       763MiB |
|    3      6577      C   python                                       547MiB |
|    3      9813      C   python                                       763MiB |
|    3     15351      C   python                                       399MiB |
+-----------------------------------------------------------------------------+
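The !nvidia-smi output above shows four GPUs; the ctx argument simply selects which one MXNet should use. As a small sketch, the devices visible to MXNet can be enumerated as below (mx.context.num_gpus() is assumed to be available, which holds for recent MXNet 1.x releases):

n = mx.context.num_gpus()                                      # 0 on a CPU-only machine
ctxs = [mx.gpu(i) for i in range(n)] if n > 0 else [mx.cpu()]
print(ctxs)                                                    # e.g. [gpu(0), gpu(1), gpu(2), gpu(3)]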
In [6]:
a = nd.array([1,2,3],ctx=mx.gpu())
In [7]:
a
Out[7]:
[1. 2. 3.]
<NDArray 3 @gpu(0)>
In [8]:
b = nd.random.uniform(shape=(2,3),ctx=mx.gpu(1))
b
Out[8]:
[[0.59119    0.313164   0.76352036]
 [0.9731786  0.35454726 0.11677533]]
<NDArray 2x3 @gpu(1)>
In [9]:
a.context
Out[9]:
gpu(0)
In [10]:
b.context
Out[10]:
gpu(1)
In [11]:
y = x.copyto(mx.gpu())
y
Out[11]:
[1. 2. 3.]
<NDArray 3 @gpu(0)>
In [12]:
z = x.as_in_context(mx.gpu())
z
Out[12]:
[1. 2. 3.]
<NDArray 3 @gpu(0)>
In [13]:
y.as_in_context(mx.gpu()) is y
Out[13]:
True
In [14]:
y.copyto(mx.gpu()) is y
Out[14]:
False
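The two cells above show the difference: when the target context already matches the source, as_in_context returns the source array itself (shared memory, no copy), whereas copyto always allocates a new array. A minimal sketch of the consequence:

w = nd.ones((3,), ctx=mx.gpu())
same = w.as_in_context(mx.gpu())   # same NDArray object, nothing copied
copy = w.copyto(mx.gpu())          # new memory on the same device
w[:] = 0
print(same)                        # reflects the write: [0. 0. 0.]
print(copy)                        # unaffected:         [1. 1. 1.]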
In [15]:
z
Out[15]:
[1. 2. 3.]
<NDArray 3 @gpu(0)>
In [16]:
y
Out[16]:
[1. 2. 3.]
<NDArray 3 @gpu(0)>
In [17]:
(z + 2).exp()
Out[17]:
[ 20.085537  54.59815  148.41316 ]
<NDArray 3 @gpu(0)>
In [18]:
(z+2).exp()*y
Out[18]:
[ 20.085537 109.1963   445.2395  ]
<NDArray 3 @gpu(0)>
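MXNet expects all operands of an operation to live on the same device, so combining x (still on cpu(0)) with y (on gpu(0)) directly is expected to fail with an MXNetError; one operand has to be moved first. A minimal sketch:

try:
    (x + y).wait_to_read()               # x on cpu(0), y on gpu(0): mixed contexts
except mx.base.MXNetError:
    print('cross-device add failed')

print(x.as_in_context(mx.gpu()) + y)     # copy x to gpu(0) first, then add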
 

GPU computation with Gluon

In [19]:
net = nn.Sequential()
In [20]:
net.add(nn.Dense(1))
In [21]:
net.initialize(ctx=mx.gpu())
In [22]:
net(y)
Out[22]:
[[0.0068339 ]
 [0.01366779]
 [0.02050169]]
<NDArray 3x1 @gpu(0)>
In [23]:
net[0].weight.data()
Out[23]:
[[0.0068339]]
<NDArray 1x1 @gpu(0)>
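Because the parameters were initialized on gpu(0), any input passed to net must live on gpu(0) as well. If the parameters later need to move, Gluon's ParameterDict.reset_ctx copies them to another device; a short sketch (the input values are only illustrative):

x_gpu = nd.array([[4.0], [5.0]], ctx=mx.gpu())   # input on the same device as the parameters
print(net(x_gpu))

net.collect_params().reset_ctx(mx.cpu())         # move all parameters to the CPU
print(net[0].weight.data().context)              # now cpu(0)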

Reposted from: https://www.cnblogs.com/TreeDream/p/10032524.html

  • 0
    点赞
  • 0
    收藏
    觉得还不错? 一键收藏
  • 0
    评论
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值