pytorch
小饼干超人
pad_sequence: padding sentences to the same length
`torch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0)` pads a list of variable-length tensors with `padding_value` so that they all end up the same length. Example:

```python
>>> from torch.nn.utils.rnn import pad_sequence
>>> a...
```

Original · 2019-04-23 10:50:20 · 14548 views · 2 comments
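The excerpt above is cut off, so here is a minimal runnable sketch of `pad_sequence` (the tensors and values are chosen for illustration, not taken from the post):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Three variable-length 1-D tensors
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5])
c = torch.tensor([6])

# Pad with padding_value=0 up to the longest length (3);
# batch_first=True gives shape (batch, max_len)
padded = pad_sequence([a, b, c], batch_first=True, padding_value=0)
print(padded)
# tensor([[1, 2, 3],
#         [4, 5, 0],
#         [6, 0, 0]])
```

With the default `batch_first=False` the result would instead have shape (max_len, batch).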
conda fails to activate an environment with "Could not find conda environment", even though it shows up under envs
could not find conda environment

Original · 2022-07-21 20:34:02 · 6614 views · 0 comments
How to point Jupyter Notebook at a specific virtual environment
1. In a terminal, activate the environment first: `$ conda activate py36`
2. With the environment active, run `jupyter notebook`
3. In the newly opened notebook page, `import` will resolve against the packages installed in the `py36` environment

Original · 2021-06-08 19:36:40 · 361 views · 2 comments
Printing a separator line in Java/Scala/Python (string repetition)
Java:

```java
import org.apache.commons.lang3.StringUtils;

public class TestOperator {
    public static void main(String[] args) {
        System.out.println(StringUtils.repeat("-", 10));
    }
}
```

Scala:

```scala
scala> print("-" * 10)
----------
```

Python:

```python
>>> print("-"...
```

Original · 2021-04-26 17:57:02 · 571 views · 0 comments
[Solved] CUDNN_STATUS_BAD_PARAM and LSTM/RNN or any recurrent structure
Fix: check whether the input data is float32; if it is not, convert it to float32.
Reference: https://github.com/pytorch/pytorch/issues/2267

Original · 2019-03-28 10:38:09 · 4025 views · 0 comments
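A hedged sketch of the fix described above; the LSTM sizes here are illustrative, not taken from the original issue:

```python
import numpy as np
import torch

# Data coming from numpy is often float64; cuDNN's recurrent kernels
# expect float32, and a mismatch can raise CUDNN_STATUS_BAD_PARAM.
x = torch.from_numpy(np.random.randn(5, 3, 10))  # dtype: torch.float64
x = x.float()                                    # cast to torch.float32

lstm = torch.nn.LSTM(input_size=10, hidden_size=20)
output, (hn, cn) = lstm(x)  # runs on CPU here; the cast matters on GPU
```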
PyTorch learning resources
the-incredible-pytorch

Original · 2019-04-09 10:54:54 · 141 views · 0 comments
Using pad_sequence in PyTorch

```python
>>> from torch.nn.utils.rnn import pad_sequence
>>> input_x = [[1,2,3],[4,5,6,7,8],[8,9]]
>>> norm_data_pad = pad_sequence([torch.from_numpy(np.array(x)) for x in input_x], b...
```

Original · 2019-04-23 13:33:40 · 11474 views · 2 comments
Understanding pack and pad in PyTorch in one article
Link: packing-unpacking-pytorch-minimal-tutorial. Sorry, a translation will be added when I find time.

Original · 2019-05-21 21:11:09 · 1271 views · 1 comment
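Until the translation lands, here is a minimal sketch of the pack/pad round trip the linked tutorial covers (the sequence lengths and feature sizes are illustrative assumptions):

```python
import torch
from torch.nn.utils.rnn import (pad_sequence, pack_padded_sequence,
                                pad_packed_sequence)

# Variable-length sequences with feature dim 4, lengths sorted descending
seqs = [torch.randn(n, 4) for n in (5, 3, 2)]
lengths = torch.tensor([5, 3, 2])

padded = pad_sequence(seqs, batch_first=True)            # (3, 5, 4)
packed = pack_padded_sequence(padded, lengths, batch_first=True)

rnn = torch.nn.LSTM(input_size=4, hidden_size=6, batch_first=True)
packed_out, _ = rnn(packed)

# Unpack back into a padded tensor; out_lengths matches the originals
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
```

Packing lets the RNN skip the padded positions instead of computing on them.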
How to concatenate tensors produced in a loop in PyTorch
Goal: take the multiple tensors produced by a for loop and concatenate them into one. Solution:

```python
>>> import torch
>>> input = torch.randn(2, 5)
>>> input.unsqueeze_(1)
tensor([[[-0.1127,  0.1031, -1.7152, -0.1951,  0.8266]],
        ...
```

Original · 2019-07-23 21:32:08 · 11545 views · 3 comments
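The excerpt stops before the final concatenation; a small self-contained sketch of the whole pattern (the shapes are chosen for illustration):

```python
import torch

chunks = []
for _ in range(4):
    t = torch.randn(2, 5)
    chunks.append(t.unsqueeze(0))  # add a leading dim: (1, 2, 5)

# Concatenate along the new dim 0 -> (4, 2, 5)
stacked = torch.cat(chunks, dim=0)
```

`torch.stack(chunks_without_unsqueeze, dim=0)` would give the same result without the manual `unsqueeze`.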
The difference between torch.range() and torch.arange()

```python
>>> y = torch.range(1, 6)
>>> y
tensor([1., 2., 3., 4., 5., 6.])
>>> y.dtype
torch.float32
>>> z = torch.arange(1, 6)
>>> z
tensor([1, 2, 3, 4, 5])
>>> ...
```

In short: `torch.range` includes the end point and returns floats, while `torch.arange` excludes it and infers an integer dtype here; `torch.range` is deprecated in favor of `torch.arange`.

Original · 2019-03-26 20:57:19 · 64455 views · 1 comment
The difference between numpy repeat and torch repeat
torch repeat:

```python
>>> x = torch.randn(1, 2)
>>> x
tensor([[1.2059, 2.4903]])
>>> x.repeat(3, 1)
tensor([[1.2059, 2.4903],
        [1.2059, 2.4903],
        [1.2059, 2.4903]])
```

numpy ...

Original · 2019-03-26 17:32:30 · 2003 views · 1 comment
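The numpy half of the comparison is cut off above. As a sketch (array values chosen for illustration): `np.repeat` repeats *elements* along an axis, whereas `Tensor.repeat` tiles the whole tensor as shown in the excerpt:

```python
import numpy as np

x = np.array([[1.0, 2.0]])      # shape (1, 2)

# Repeat along axis 0: each row is copied 3 times -> shape (3, 2)
rows = np.repeat(x, 3, axis=0)

# Without an axis, numpy flattens first and repeats element-wise
flat = np.repeat(x, 2)          # array([1., 1., 2., 2.])
```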
Checking a tensor's data type in PyTorch

```python
import torch

x = torch.Tensor([1, 2])
print('x: ', x)
print('type(x): ', type(x))
print('x.dtype: ', x.dtype)  # the concrete dtype of x

y = x.int()
print('y: ', y)
print('type(y): ', type(y))
print('y.dtype: ', y.d...
```

Original · 2019-02-22 14:58:15 · 36650 views · 0 comments
A detailed look at the torch.nn.Conv2d() function

```python
import torch

x = torch.randn(2, 1, 7, 3)
conv = torch.nn.Conv2d(1, 8, (2, 3))
res = conv(x)
print(res.shape)  # shape = (2, 8, 6, 1)
```

Input: `x[batch_size, channels, height_1, width_1]`
batch_size ...

Original · 2019-02-21 09:52:02 · 56080 views · 14 comments
A brief look at the difference between torch.nn and torch.nn.functional
Example: take the maxpool operation.

```python
import torch

x_input = torch.randn(20, 16, 50)

# Define a module first, then pass the data through it
# pool of size=3, stride=2
m = torch.nn.MaxPool1d(3, stride=2)
x_output = m(x_input)
print(x_output.shape)  # torc...
```

Original · 2019-02-21 10:47:51 · 1238 views · 0 comments
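To round out the comparison, the same pooling written with the functional API (a sketch using the excerpt's shapes):

```python
import torch
import torch.nn.functional as F

x_input = torch.randn(20, 16, 50)

# Module API: instantiate a layer object, then call it
m = torch.nn.MaxPool1d(3, stride=2)
y_module = m(x_input)

# Functional API: a stateless function call, no layer object
y_functional = F.max_pool1d(x_input, kernel_size=3, stride=2)
```

Both produce identical outputs of shape (20, 16, 24); the module form fits naturally in `nn.Sequential`, the functional form inside a custom `forward()`.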
Understanding torch.nn.MSELoss() and torch.optim.SGD
A simple example:

```python
import torch
import torch.nn as nn

x = torch.randn(10, 3)
y = torch.randn(10, 2)

# Build a fully connected layer.
linear = nn.Linear(3, 2)

# Build loss fu...
```

Original · 2019-03-09 20:54:14 · 4629 views · 0 comments
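The excerpt cuts off before the loss and optimizer are built; a minimal completion of the same example (the learning rate and seed are illustrative assumptions, not the post's values):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(10, 3)
y = torch.randn(10, 2)

# Build a fully connected layer, an MSE loss, and an SGD optimizer.
linear = nn.Linear(3, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01)

loss_before = criterion(linear(x), y).item()

# One training step: zero grads, backprop, parameter update
optimizer.zero_grad()
loss = criterion(linear(x), y)
loss.backward()
optimizer.step()

loss_after = criterion(linear(x), y).item()
```

A single small SGD step on this quadratic loss should lower it slightly.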
What detach() does, seen through linear_regression
linear_regression.py:

```python
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

# Hyper-parameters
input_size = 1
output_size = 1
num_...
```

Original · 2019-03-09 23:51:41 · 865 views · 0 comments
[Solved] TypeError: __init__() takes 1 positional argument but 2 were given
The convolutional_neural_network code:

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

device = torch.device('cu...
```

Original · 2019-03-10 12:50:58 · 77984 views · 13 comments
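The excerpt is cut off before the actual error site, so here is a generic reproduction of this TypeError (illustrative only; the class name is hypothetical and this is not necessarily the post's exact root cause):

```python
# __init__ accepts only `self`, but the caller passes an argument.
class ConvNet:
    def __init__(self):       # takes 1 positional argument: self
        self.num_classes = 10

try:
    model = ConvNet(10)       # 2 were given: self plus 10
except TypeError as exc:
    message = str(exc)

# Fix: declare the parameter the caller is passing
class ConvNetFixed:
    def __init__(self, num_classes):
        self.num_classes = num_classes

model = ConvNetFixed(10)
```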
torch.nn.LSTM() dimensions explained in detail

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(10, 20, 2)
x = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = lstm(x, (h0, c0))
# output.shape  torch.Si...
```

Original · 2019-03-14 21:23:01 · 18216 views · 4 comments
The subprocess module and `bash -c command`

```python
>>> import subprocess
>>> out = subprocess.check_output(['ls', '-l'])
>>> out
b'\xe6\x80\xbb\xe7\x94\xa8\xe9\x87\x8f 36\n-rw-rw-r-- 1 root root 3713 Mar 15 09:54 dataloader....
```

Original · 2019-03-15 11:11:44 · 1406 views · 0 comments
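As for the `bash -c` part of the title, a short sketch (the command itself is illustrative): `bash -c` takes a single command string, so pipes and other shell syntax work inside one argument:

```python
import subprocess

# The whole pipeline is one shell command string handed to bash -c
out = subprocess.check_output(['bash', '-c', 'echo hello | tr a-z A-Z'])
print(out)  # b'HELLO\n'
```

`subprocess.check_output(cmd, shell=True)` achieves the same via the default shell.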
model.zero_grad() versus optimizer.zero_grad()
There are two ways to set all of a model's parameter gradients to zero directly:

```python
model.zero_grad()
optimizer.zero_grad()  # equivalent when optimizer = optim.Optimizer(model.parameters())
```

If you want to zero the gradient of a single Variable, just use:

```python
Variable.grad.data.zero_()
```

Author: CodeTutor. Source: CSDN. Original article: ht...

Reposted · 2019-03-16 15:48:53 · 7176 views · 0 comments
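A small runnable check of the equivalence described above (the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Accumulate some gradients
model(torch.randn(4, 3)).sum().backward()
grad_before = model.weight.grad.clone()

# Clears the gradients of every parameter the optimizer holds; because the
# optimizer was built from model.parameters(), model.zero_grad() would
# clear exactly the same set.
optimizer.zero_grad()
```

Note that recent PyTorch versions set cleared gradients to `None` by default rather than filling them with zeros.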
Understanding the torch.nn.Linear() function

```python
import torch

x = torch.randn(128, 20)      # the input has shape (128, 20)
m = torch.nn.Linear(20, 30)  # 20 and 30 are the in/out feature dimensions
output = m(x)
print('m.weight.shape:\n ', m.weight.shape)
print('m.bias.shape:\n', m.bias.shape)
print...
```

Original · 2019-02-21 15:10:19 · 146959 views · 19 comments