
PyTorch
Jump1024
AGI Never Stop
-
[Original] pytorch add_ (2020-09-22 20:40:05)

# a = a + 4 * 5
import torch
a = torch.tensor([1, 2, 3])
a.data.add_(torch.tensor(4), torch.tensor(5))  # old signature add_(value, other): a += value * other
print(a)  # tensor([21, 22, 23])
# Note: recent PyTorch takes the scale as a keyword instead: a.add_(torch.tensor(5), alpha=4)
-
[Original] pytorch addcdiv and addcdiv_ (2020-09-22 20:37:27)

# a = a + 4 / 2
import torch
a = torch.tensor([1, 2, 3])
a.addcdiv(torch.tensor(4), torch.tensor(2))
print(a)  # tensor([1, 2, 3]) -- out-of-place: a is unchanged
a.data.addcdiv_(torch.tensor(4), torch.tensor(2))
print(a)  # tensor([3, 4, 5]) -- in-place: a is updated
a = a.addcdiv(torch.tensor(4), torch.tensor(2))  # or assign the out-of-place result back
-
[Original] CUDA .run installer fails with "Finished with code: 256" (2020-06-20 21:01:27)

Disable the nouveau driver first, and make sure lsmod | grep nouveau prints nothing before rerunning the installer.
-
[Original] pytorch pad example (2020-05-09 19:30:15)

import torch
tensor = torch.Tensor([[[1, 1], [2, 2], [3, 3]],
                       [[4, 4], [5, 5], [6, 6]]])
print(tensor.shape)  # torch.Size([2, 3, 2])
print(tensor)
pad_tensor = torch.constant_pad_nd(tensor, (0, 0, 0, 2))  # pad dim -2 with two trailing rows of zeros
print(pad_tensor.shape)  # torch.Size([2, 5, 2])
print(pad_tensor)
# tensor([[[1., 1.],
#          [2., 2.],
#          [3., 3.],
#          [0., 0.],
#          [0., 0.]],
#
#         [[4., 4.],
#          [5., 5.],
#          [6., 6.],
#          [0., 0.],
#          [0., 0.]]])
-
[Original] multi-label classification: the loss keeps increasing (2020-02-25 20:13:35)

labels are [batch_size, num_class] and logits are [batch_size, num_class]. Each label looks like [0,0,1,0,0,0,1,0,1,0], i.e. 3 of the 10 classes are positive. tf.nn.softmax_cross_entropy_with_logits cannot be used here, because softmax assumes exactly one positive class. Use torch.nn.BCELoss (on sigmoid outputs) in PyTorch and tf.losses.sigmoid_cross_entropy in TensorFlow; a sketch follows.
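A minimal PyTorch sketch (my own illustration, not from the post), using BCEWithLogitsLoss, which fuses the sigmoid into the binary cross-entropy for numerical stability:

import torch
import torch.nn as nn

batch_size, num_class = 4, 10
logits = torch.randn(batch_size, num_class)                    # raw model outputs
labels = torch.randint(0, 2, (batch_size, num_class)).float()  # multi-hot targets

criterion = nn.BCEWithLogitsLoss()  # one independent binary loss per class
loss = criterion(logits, labels)
print(loss)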
-
[Original] A pytorch pointer net implementation (2019-12-16 11:20:29)

https://github.com/pcyin/tranX/blob/master/model/pointer_net.py
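The linked file has the full version. As a rough sketch of the core idea (my own simplification; all names are illustrative): a pointer network scores each source position against a query and outputs a distribution over input positions rather than over a vocabulary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerNet(nn.Module):
    def __init__(self, src_dim, query_dim):
        super().__init__()
        self.proj = nn.Linear(src_dim, query_dim, bias=False)

    def forward(self, src_encodings, query):
        # src_encodings: [batch, src_len, src_dim], query: [batch, query_dim]
        keys = self.proj(src_encodings)                          # [batch, src_len, query_dim]
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)  # [batch, src_len]
        return F.softmax(scores, dim=-1)                         # distribution over source positions

ptr = PointerNet(src_dim=8, query_dim=6)
print(ptr(torch.randn(2, 5, 8), torch.randn(2, 6)).shape)  # torch.Size([2, 5])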
-
[Original] A pytorch seq2seq implementation with a copy mechanism (2019-12-16 11:19:38)

https://github.com/pcyin/tranX/blob/master/model/seq2seq_copy.py
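The gist of the copy mechanism in that file (sketched from the general recipe, not the repo's exact code): a learned gate p_gen mixes the ordinary vocabulary distribution with the attention distribution scattered onto the source tokens' vocabulary ids.

import torch
import torch.nn.functional as F

def mix_copy(vocab_logits, copy_attn, src_token_ids, p_gen):
    # vocab_logits: [batch, vocab_size]; copy_attn: [batch, src_len]
    # src_token_ids: [batch, src_len] vocabulary ids of the source tokens; p_gen: [batch, 1]
    p_vocab = F.softmax(vocab_logits, dim=-1)
    p_copy = torch.zeros_like(p_vocab).scatter_add_(1, src_token_ids, copy_attn)
    return p_gen * p_vocab + (1 - p_gen) * p_copy

probs = mix_copy(torch.randn(2, 20), F.softmax(torch.randn(2, 5), -1),
                 torch.randint(0, 20, (2, 5)), torch.sigmoid(torch.randn(2, 1)))
print(probs.sum(dim=-1))  # each row sums to 1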
-
[Original] A pytorch self-attention that maps inputs [batch, seq_len1, hidden_dim] to outputs [batch, seq_len2, hidden_dim] (2019-12-16 11:17:23)

class Attention(nn.Module):
    """
    inputs is [batch, seq_len1, hidden_dim]
    labels_num is seq_len2
    """
    def __init__(self, labels_num, hidden_size):
        super(Attention, self).__init__()
        ...
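The preview cuts the class off. A minimal sketch of such a layer (my own reconstruction, assuming one learned query per output position):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    def __init__(self, labels_num, hidden_size):
        super().__init__()
        self.queries = nn.Linear(hidden_size, labels_num, bias=False)  # one query per output slot

    def forward(self, inputs):
        # inputs: [batch, seq_len1, hidden_dim]
        scores = self.queries(inputs)                      # [batch, seq_len1, labels_num]
        weights = F.softmax(scores, dim=1)                 # normalize over seq_len1
        return torch.bmm(weights.transpose(1, 2), inputs)  # [batch, seq_len2, hidden_dim]

layer = Attention(labels_num=4, hidden_size=16)
print(layer(torch.randn(2, 7, 16)).shape)  # torch.Size([2, 4, 16])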
-
[Original] pytorch: building a three-dimensional one-hot tensor (2019-10-14 09:24:30)

import torch
batch_size = 2
sequence_len = 3
hidden_dim = 5
x = torch.zeros(batch_size, sequence_len, hidden_dim).scatter_(
    dim=-1, index=torch.LongTensor([[[2], [2], [1]], ...
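The preview is truncated. A complete version along the same lines (the second batch's indices are my own made-up values):

import torch
batch_size, sequence_len, hidden_dim = 2, 3, 5
index = torch.LongTensor([[[2], [2], [1]],
                          [[0], [4], [3]]])  # [batch, seq, 1]: position of the 1 in each row
x = torch.zeros(batch_size, sequence_len, hidden_dim).scatter_(dim=-1, index=index, value=1)
print(x)  # x[b, s] is a one-hot row with the 1 at index[b, s, 0]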
-
[Original] pytorch error: RuntimeError: Invalid index in scatter at ... (2019-10-14 09:20:39)

Most likely an index value is out of range for the target dimension. For example:

import torch
batch_size = 2
hidden_dim = 5
x = torch.zeros(batch_size, hidden_dim).scatter_(dim=-1, index=torch.LongTensor([[2], [1]]), ...
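A complete repro (my own reconstruction): scatter indices must lie in [0, hidden_dim); anything larger triggers the error.

import torch
batch_size, hidden_dim = 2, 5
x = torch.zeros(batch_size, hidden_dim)
x.scatter_(dim=-1, index=torch.LongTensor([[2], [1]]), value=1)  # fine: 2 and 1 are < 5
x.scatter_(dim=-1, index=torch.LongTensor([[7], [1]]), value=1)  # RuntimeError: index out of range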
-
[Original] pytorch: initializing a tensor (2019-03-08 17:24:52)

import torch
input_tensor = torch.tensor([1, 2, 3, 4, 5])
input_tensor = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
-
[Original] A pytorch implementation of KL divergence (2019-05-16 14:23:56)

import torch
import torch.nn.functional as F

# p_logit: [batch, dim0]
# q_logit: [batch, dim0]
def kl_categorical(p_logit, q_logit):
    p = F.softmax(p_logit, dim=-1)
    _kl = torch.sum(p * (F.log_softmax(p_logit, dim=-1)
                         - F.log_softmax(q_logit, dim=-1)), 1)
    return torch.mean(_kl)
-
[Original] A pytorch implementation of Virtual Adversarial Training (2019-05-20 15:34:06)

def kl_categorical(p_logit, q_logit):
    p = F.softmax(p_logit, dim=-1)
    _kl = torch.sum(p * (F.log_softmax(p_logit, dim=-1)
                         - F.log_softmax(q_logit, dim=-1)), 1)
    return torch.mean(_kl)
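The preview only shows the KL helper. The rest of VAT (sketched from the Miyato et al. recipe, not from the post; model, xi and eps are illustrative) finds the worst-case perturbation direction by backpropagating through the KL, then penalizes the divergence that perturbation causes:

import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    # x: [batch, dim]; model maps inputs to logits
    with torch.no_grad():
        p_logit = model(x)
    d = torch.randn_like(x)
    for _ in range(n_power):  # power iteration for the adversarial direction
        d = (xi * F.normalize(d, dim=-1)).requires_grad_()
        adv_kl = kl_categorical(p_logit, model(x + d))
        d = torch.autograd.grad(adv_kl, d)[0].detach()
    r_adv = eps * F.normalize(d, dim=-1)
    return kl_categorical(p_logit, model(x + r_adv))  # added to the supervised loss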
-
[Original] A pytorch implementation of Adversarial Training (2019-05-20 15:31:59)

def at_loss(embedder, encoder, clf, batch, perturb_norm_length=5.0):
    embedded = embedder(batch)  # [seq_len, batch, hidden_dim]
    embedded.retain_grad()
    ce = F.cross_entropy(clf(encoder(embedded ...
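A completion of the idea (my reconstruction of the standard FGSM-style recipe; labels and the module call signatures are assumptions): take the gradient of the cross-entropy with respect to the embeddings, step along its normalized direction, and compute the loss again on the perturbed embeddings.

import torch
import torch.nn.functional as F

def at_loss(embedder, encoder, clf, inputs, labels, perturb_norm_length=5.0):
    embedded = embedder(inputs)                # [seq_len, batch, hidden_dim]
    ce = F.cross_entropy(clf(encoder(embedded)), labels)
    grad, = torch.autograd.grad(ce, embedded)  # gradient w.r.t. the embeddings
    perturb = perturb_norm_length * F.normalize(grad.detach(), dim=-1)
    return F.cross_entropy(clf(encoder(embedded + perturb)), labels)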
-
[Original] pytorch pack_padded_sequence usage example (2019-05-08 11:58:22)

https://github.com/guotong1988/sqlova-debug-read/blob/master/sqlova/utils/utils_wikisql.py#L223
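The link shows it used in context. A minimal self-contained round trip (my own example):

import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

batch = torch.randn(3, 6, 8)       # [batch, max_len, dim], zero-padded
lengths = torch.tensor([6, 4, 2])  # true lengths, sorted descending
lstm = torch.nn.LSTM(8, 16, batch_first=True)

packed = pack_padded_sequence(batch, lengths, batch_first=True)
out_packed, _ = lstm(packed)       # the LSTM skips the padded steps
out, out_lengths = pad_packed_sequence(out_packed, batch_first=True)
print(out.shape)     # torch.Size([3, 6, 16])
print(out_lengths)   # tensor([6, 4, 2])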
-
[Original] pytorch torch.gather example (2019-03-08 17:58:12)

import torch
input_tensor = torch.tensor([[1, 2], [3, 4], [5, 6]])
gather_input = torch.tensor([[0, 0], [1, 0], [1, 1]])
output_tensor = torch.gather(input_tensor, 1, gather_input)
print(output_tensor)
# tensor([[1, 1],
#         [4, 3],
#         [6, 6]])
-
[Original] pytorch nonzero usage example (2019-03-08 17:48:15)

import torch
input_tensor = torch.tensor([1, 2, 3, 4, 5])
mask = input_tensor > 3
print(mask)     # tensor([0, 0, 0, 1, 1], dtype=torch.uint8)
indexes = mask.nonzero().squeeze()
print(indexes)  # tensor([3, 4])
-
[Original] pytorch: filtering values within a range (2019-03-08 17:47:04)

import torch
input_tensor = torch.tensor([1, 2, 3, 4, 5])
print(input_tensor > 3)                           # tensor([0, 0, 0, 1, 1], dtype=torch.uint8)
indexes = (input_tensor > 3).nonzero().squeeze()  # squeeze to 1-D: index_select needs a vector
print(indexes)                                    # tensor([3, 4])
print(input_tensor.index_select(0, indexes))      # tensor([4, 5])
-
[Original] A pytorch implementation of Entropy Minimization (EM) (2019-05-16 14:49:08)

import torch
import torch.nn.functional as F

# p_logit: [batch, class_num]
def entropy_loss(p_logit):
    p = F.softmax(p_logit, dim=-1)
    return -1 * torch.sum(p * F.log_softmax(p_logit, dim=-1)) / p_logit.size()[0]
-
[Original] pytorch multinomial error: device-side assert triggered (2019-06-14 16:45:43)

/pytorch/aten/src/THC/THCTensorRandom.cuh:187: void sampleMultinomialOnce(long *, long, int, T *, T *, int, int) [with T = float, AccT = float]: block: [6,0,0], thread: [5,0,0] Assertion `THCNumerics...
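This assert usually means some rows passed to multinomial are not valid distributions (NaN or negative entries, or an all-zero row), often from an upstream numerical blow-up. A CPU repro of the same failure mode (my own example):

import torch
probs = torch.tensor([[0.0, 0.0, 0.0]])  # an all-zero row: nothing to sample
torch.multinomial(probs, 1)              # RuntimeError: invalid multinomial distribution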
-
[Original] Using OpenNMT's end-to-end interface (2019-07-09 15:16:29)

Install (the requirements.txt pulls in a very recent torchtext):
git clone --branch 0.9.1 https://github.com/OpenNMT/OpenNMT-py.git
cd OpenNMT-py
pip install -r requirements.txt
cd ..

Preprocess: src-train.txt and tgt-train.txt are the raw English ...
-
[Original] The concept of contiguous in PyTorch (2017-12-06 14:05:48)

x = torch.Tensor(2, 3)
y = x.permute(1, 0)
y.view(-1)  # error: permute returns a non-contiguous view over x's storage
y = x.permute(1, 0).contiguous()
y.view(-1)  # OK: contiguous() copies the data into a compact layout
-
[Original] pytorch: building a one-hot tensor (2019-10-11 16:30:59)

import torch
batch_size = 2
hidden_dim = 5
x = torch.zeros(batch_size, hidden_dim).scatter_(dim=-1, index=torch.LongTensor([[2], [1]]), value=1)
print(x)
# tensor([[0., 0., 1., 0., 0.],
#         [0., 1., 0., 0., 0.]])
-
[Original] A pytorch self attention implementation (2019-09-12 16:22:44)

class SelfAttention(nn.Module):
    """
    scores each element of the sequence with a linear layer and uses the
    normalized scores to compute a context over the sequence.
    """
    def __init__(sel...
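The preview cuts off at __init__. A minimal version matching that docstring (my own reconstruction):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Score each element with a linear layer; the normalized scores weight a context vector."""
    def __init__(self, hidden_size):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, inp):
        # inp: [batch, seq_len, hidden_size]
        scores = self.scorer(inp).squeeze(-1)                   # [batch, seq_len]
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), inp).squeeze(1)  # [batch, hidden_size]

attn = SelfAttention(16)
print(attn(torch.randn(2, 7, 16)).shape)  # torch.Size([2, 16])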
-
[Original] sqlova code walkthrough (2019-09-10 11:06:55)

How inputs are fed to BERT (https://github.com/naver/sqlova/blob/master/sqlova/utils/utils_wikisql.py, in the generate_inputs method):
[CLS] question_word_1, question_word_2, ... question_word_n [SEP] header_1 [SEP] header_2 [SEP] ...
-
[Original] pytorch attend operation code (2019-09-09 08:36:35)

# seq:  [batch, seq_len, hidden_dim]
# cond: [batch, hidden_dim]
# lens: [batch]
def attend(seq, cond, lens):
    """
    attend over the sequences `seq` using the condition `cond`.
    """
    scores = ...
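A completion of that signature (my reconstruction; masking positions beyond each length is an assumption about what lens is for):

import torch
import torch.nn.functional as F

def attend(seq, cond, lens):
    """Attend over the sequences `seq` using the condition `cond`."""
    scores = torch.bmm(seq, cond.unsqueeze(2)).squeeze(2)   # [batch, seq_len]
    mask = torch.arange(seq.size(1)).unsqueeze(0) >= lens.unsqueeze(1)
    scores = scores.masked_fill(mask, float('-inf'))        # ignore padded positions
    weights = F.softmax(scores, dim=-1)
    return torch.bmm(weights.unsqueeze(1), seq).squeeze(1)  # [batch, hidden_dim]

out = attend(torch.randn(2, 5, 8), torch.randn(2, 8), torch.tensor([5, 3]))
print(out.shape)  # torch.Size([2, 8])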
-
[Original] pytorch euclidean distance implementation (2019-08-21 15:11:26)

import torch.nn.functional as F
distance = F.pairwise_distance(rep_a, rep_b, p=2)

where rep_a and rep_b are [batch_size, hidden_dim]
-
[Original] A pytorch implementation of Multiple Negatives Ranking Loss (2019-08-05 11:24:48)

Excerpted from https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/losses.py
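The gist of that loss (my own sketch, not the repo's exact code): score every anchor against every candidate in the batch and treat the diagonal, each anchor's own positive, as the correct class of a cross-entropy; every other in-batch candidate acts as a negative.

import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(rep_a, rep_b):
    # rep_a, rep_b: [batch, hidden_dim]; rep_b[i] is the positive for rep_a[i]
    scores = rep_a @ rep_b.t()             # [batch, batch] similarity matrix
    labels = torch.arange(scores.size(0))  # the diagonal entries are the matches
    return F.cross_entropy(scores, labels)

print(multiple_negatives_ranking_loss(torch.randn(4, 8), torch.randn(4, 8)))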
-
[Original] pytorch: training with cosine similarity as the loss (2019-08-05 10:47:50)

rep_a is [batch_size, hidden_dim], rep_b is [batch_size, hidden_dim], labels is [batch_size].
Excerpted from https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/models/TransformerModel.py
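The code itself did not survive the preview. A sketch in the sentence-transformers spirit (my assumption: regress the pair's cosine similarity onto the gold label with MSE):

import torch
import torch.nn.functional as F

def cosine_loss(rep_a, rep_b, labels):
    # rep_a, rep_b: [batch_size, hidden_dim]; labels: [batch_size] similarity targets
    cos = F.cosine_similarity(rep_a, rep_b, dim=-1)  # [batch_size]
    return F.mse_loss(cos, labels)

print(cosine_loss(torch.randn(4, 8), torch.randn(4, 8), torch.tensor([1.0, 0.0, 1.0, 0.5])))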
-
[Original] pytorch index_select example (2019-03-08 17:23:30)

import torch
input_tensor = torch.tensor([1, 2, 3, 4, 5])
print(input_tensor.index_select(0, torch.tensor([0, 2, 4])))  # tensor([1, 3, 5])
input_tensor = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
print(input_tensor.index_select(...
-
[Original] pytorch torch.full example (2019-03-08 14:46:15)

>>> torch.full((2, 3), 3.141592)
tensor([[ 3.1416,  3.1416,  3.1416],
        [ 3.1416,  3.1416,  3.1416]])
-
[Original] pytorch's equivalent of tf.slice (2018-01-10 14:26:13)

import torch
A_idx = torch.LongTensor([0, 2])  # the index vector
B = torch.LongTensor([[1, 2, 3], [4, 5, 6]])
C = B.index_select(1, A_idx)
# 1 3
# 4 6
-
[Original] pytorch: sampling with and without replacement (2018-01-15 17:20:08)

import torch
import torch.nn.functional as F
from torch.autograd import *

a = Variable(torch.FloatTensor([[0, 0, 0, 0, 0, 0, 90, 100]]))
b = F.softmax(a, -1)
print(b.multinomial())  # 7 or 6
print(b.multinomial(...
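The old Variable API above is gone. The modern equivalent (my own update) is torch.multinomial with an explicit replacement flag:

import torch
probs = torch.softmax(torch.tensor([[0., 0., 0., 0., 0., 0., 90., 100.]]), dim=-1)
print(torch.multinomial(probs, 2, replacement=True))   # indices may repeat, e.g. tensor([[7, 7]])
print(torch.multinomial(probs, 2, replacement=False))  # distinct indices, e.g. tensor([[7, 6]])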
-
[Original] pytorch multinomial example (2017-12-31 20:07:25)

It simply draws samples:

import torch
import torch.nn.functional as F

a = torch.FloatTensor([[0, 0, 0, 0, 0, 0, 0, 100]])
b = F.softmax(a, dim=-1)
b.multinomial()
# Variable containing:
#  7
# [torch.LongTensor of size 1x1]
b.multinomial()
# Variable containing:
#  7
# [torch.LongTensor of size 1x1]
-
[Original] Installing torchvision from source after pip install fails (2018-01-15 16:06:37)

git clone https://github.com/pytorch/vision.git
cd vision
git checkout 0.2.0
pip install pillow
python setup.py install
-
[Original] pytorch's REINFORCE algorithm, official docs (2018-01-05 11:16:05)

http://pytorch.org/docs/0.3.0/distributions.html

probs = policy_network(state)
m = Categorical(probs)
action = m.sample()                    # sample an action
next_state, reward = env.step(action)  # receive a reward
loss = -m.log_prob(action) * reward
loss.backward()
-
[Original] pytorch's equivalent of tf.transpose (2017-12-18 12:56:51)

Use permute:

>>> img_nhwc = torch.randn(10, 480, 640, 3)
>>> img_nhwc.size()
torch.Size([10, 480, 640, 3])
>>> img_nchw = img_nhwc.permute(0, 3, 1, 2)
>>> img_nchw.size()
torch.Size([10, 3, 480, 640])
-
[Original] Setting training and test modes for PyTorch dropout (2017-12-05 20:40:13)

class Net(nn.Module):
    ...

model = Net()
...
model.train()  # put the module in training mode; affects Dropout and BatchNorm
model.eval()   # put the module in evaluation mode; affects Dropout and BatchNorm
-
[Original] PyTorch torch.stack example (2017-12-05 11:01:00)

Not the same as concat: stack adds a new dimension.

import torch
a = torch.ones([1, 2])
b = torch.ones([1, 2])
torch.stack([a, b], 1)
# (0 ,.,.) =
#   1  1
#   1  1
# [torch.FloatTensor of size 1x2x2]
-
[Original] PyTorch concat, i.e. torch.cat, example (2017-12-05 11:14:38)

import torch
a = torch.ones([1, 2])
b = torch.ones([1, 2])
torch.cat([a, b], 1)
#  1  1  1  1
# [torch.FloatTensor of size 1x4]