Original documentation (translated):
Method: half()
Casts all floating point parameters and buffers to half (float16) datatype.
Returns: self
Return type: Module
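Note that the documentation says "floating point parameters and buffers": non-floating-point tensors are left alone. As a minimal sketch of my own (not from the post; the `step` buffer is a hypothetical example), an integer buffer keeps its dtype after half():

```python
import torch

# Minimal sketch (assumption-checking example, not from the post):
# half() casts only floating point tensors; integer buffers keep their dtype.
m = torch.nn.Linear(3, 2)
m.register_buffer('step', torch.zeros(1, dtype=torch.int64))  # hypothetical counter buffer

m.half()
print(m.weight.dtype)  # torch.float16 (floating point parameter was cast)
print(m.step.dtype)    # torch.int64 (integer buffer untouched)
```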
Code experiment:
import torch
import torch.nn as nn
torch.manual_seed(seed=20200910)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(  # input shape: torch.Size([64, 1, 28, 28])
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),  # output shape: torch.Size([64, 64, 28, 28])
        )
        register_buffer_in_temp = torch.randn(4, 6)
        self.register_buffer('register_buffer_in', register_buffer_in_temp)

    def forward(self, x):
        pass

print('CUDA (GPU) available:', torch.cuda.is_available())
print('torch version:', torch.__version__)
model = Model()  # .cuda()

print('before model.half()'.center(100, "-"))
print('calling named_buffers()'.center(100, "-"))
for name, buf in model.named_buffers():
    print(name, '-->', buf.type(), '-->', buf.dtype, '-->', buf.shape)
print('calling named_parameters()'.center(100, "-"))
for name, param in model.named_parameters():
    print(name, '-->', param.type(), '-->', param.dtype, '-->', param.shape)
print('calling state_dict()'.center(100, "-"))
for k, v in model.state_dict().items():
    print(k, '-->', v.type(), '-->', v.dtype, '-->', v.shape)

model.half()

print('after model.half()'.center(100, "-"))
print('calling named_buffers()'.center(100, "-"))
for name, buf in model.named_buffers():
    print(name, '-->', buf.type(), '-->', buf.dtype, '-->', buf.shape)
print('calling named_parameters()'.center(100, "-"))
for name, param in model.named_parameters():
    print(name, '-->', param.type(), '-->', param.dtype, '-->', param.shape)
print('calling state_dict()'.center(100, "-"))
for k, v in model.state_dict().items():
    print(k, '-->', v.type(), '-->', v.dtype, '-->', v.shape)
Console output:
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Try the new cross-platform PowerShell https://aka.ms/pscore6
Loading personal and system profiles took 878 ms.
(base) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> conda activate ssd4pytorch1_2_0
(ssd4pytorch1_2_0) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> & 'D:\Anaconda3\envs\ssd4pytorch1_2_0\python.exe' 'c:\Users\chenxuqi\.vscode\extensions\ms-python.python-2020.12.424452561\pythonFiles\lib\python\debugpy\launcher' '64740' '--' 'c:\Users\chenxuqi\Desktop\News4cxq\test4cxq\test2.py'
CUDA (GPU) available: True
torch version: 1.2.0+cu92
----------------------------------------before model.half()-----------------------------------------
--------------------------------------calling named_buffers()---------------------------------------
register_buffer_in --> torch.FloatTensor --> torch.float32 --> torch.Size([4, 6])
-------------------------------------calling named_parameters()-------------------------------------
conv1.0.weight --> torch.FloatTensor --> torch.float32 --> torch.Size([64, 1, 3, 3])
conv1.0.bias --> torch.FloatTensor --> torch.float32 --> torch.Size([64])
----------------------------------------calling state_dict()----------------------------------------
register_buffer_in --> torch.FloatTensor --> torch.float32 --> torch.Size([4, 6])
conv1.0.weight --> torch.FloatTensor --> torch.float32 --> torch.Size([64, 1, 3, 3])
conv1.0.bias --> torch.FloatTensor --> torch.float32 --> torch.Size([64])
-----------------------------------------after model.half()-----------------------------------------
--------------------------------------calling named_buffers()---------------------------------------
register_buffer_in --> torch.HalfTensor --> torch.float16 --> torch.Size([4, 6])
-------------------------------------calling named_parameters()-------------------------------------
conv1.0.weight --> torch.HalfTensor --> torch.float16 --> torch.Size([64, 1, 3, 3])
conv1.0.bias --> torch.HalfTensor --> torch.float16 --> torch.Size([64])
----------------------------------------calling state_dict()----------------------------------------
register_buffer_in --> torch.HalfTensor --> torch.float16 --> torch.Size([4, 6])
conv1.0.weight --> torch.HalfTensor --> torch.float16 --> torch.Size([64, 1, 3, 3])
conv1.0.bias --> torch.HalfTensor --> torch.float16 --> torch.Size([64])
(ssd4pytorch1_2_0) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq>
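One practical caveat worth adding (my own sketch, not part of the experiment above): after model.half(), inputs must also be cast to float16, or the forward pass fails with a dtype mismatch.

```python
import torch

# Hedged sketch (not from the original post): a half-precision layer
# rejects float32 input with a dtype-mismatch RuntimeError.
conv = torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1).half()

x32 = torch.randn(2, 1, 8, 8)  # float32 input, small batch for illustration
try:
    conv(x32)
except RuntimeError as e:
    print('dtype mismatch:', e)

# On a CUDA device (where float16 convolutions are broadly supported),
# the fix is to cast the input as well, e.g. conv(x32.cuda().half())
# after moving the model with conv.cuda().
```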