PyTorch Usage Notes

1. torch.max()

        torch.max(input, dim) takes a tensor and returns two tensors: the maximum values along dimension dim and their indices. With dim=1 it returns each row's maximum together with the column index where it occurs; with dim=0 it returns each column's maximum together with its row index.

import torch

a = torch.tensor([[1, 5, 62, 54], [2, 6, 2, 6], [2, 65, 2, 6]])
x = torch.max(a, 1)  # per-row maxima and their column indices
y = torch.max(a, 0)  # per-column maxima and their row indices
print(x)
print(y)
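In recent PyTorch versions the result is a named tuple of (values, indices), so it can be unpacked directly. For the tensor a above the expected results are (ties resolve to the first occurrence):

values, indices = torch.max(a, 1)
print(values)   # tensor([62,  6, 65])      -- maximum of each row
print(indices)  # tensor([2, 1, 1])         -- column index of each maximum
values, indices = torch.max(a, 0)
print(values)   # tensor([ 2, 65, 62, 54])  -- maximum of each column
print(indices)  # tensor([1, 2, 0, 0])      -- row index of each maximum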

2. An example of a fully connected network for classification

import hiddenlayer as hl
import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# mlpc, train_loader, X_test_t and y_test_t are assumed to be defined elsewhere:
# mlpc is the network (its forward returns two hidden activations plus the output),
# train_loader is a DataLoader, X_test_t / y_test_t are the test tensors.
optimizer = torch.optim.Adam(mlpc.parameters(), lr=0.01)
loss_func = nn.CrossEntropyLoss()
history1 = hl.History()
canvals = hl.Canvas()
print_step = 25
for epoch in range(15):
    for step, (b_x, b_y) in enumerate(train_loader):
        _, _, output = mlpc(b_x)
        train_loss = loss_func(output, b_y)
        optimizer.zero_grad()  # gradients accumulate by default, so clear them before backward()
        train_loss.backward()
        optimizer.step()
        niter = epoch * len(train_loader) + step + 1
        if niter % print_step == 0:
            _, _, output = mlpc(X_test_t)
            _, pre_lab = torch.max(output, 1)
            test_accuracy = accuracy_score(y_test_t, pre_lab)
            history1.log(niter, train_loss=train_loss, test_accuracy=test_accuracy)
            with canvals:
                canvals.draw_plot(history1["train_loss"])
                canvals.draw_plot(history1["test_accuracy"])

3. torch.argmax(input, dim)

        Returns the indices of the maximum values of input along the dimension dim.

import torch

a = torch.tensor([[1, 5, 62, 54], [2, 6, 2, 6], [2, 65, 2, 6]])
print(torch.argmax(a, 1))

[1]tensor([2, 1, 1])
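torch.argmax(input, dim) is exactly the indices component of torch.max(input, dim); in recent PyTorch versions the latter is a named tuple, so the field can also be read by name:

print(torch.max(a, 1).indices)  # tensor([2, 1, 1]) -- same as torch.argmax(a, 1)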

4. tensor.view

print("before",x.shape)
x=x.view(x.size(0),-1)#-1代表由另一个参数而定
print("after", x.shape)
#作用相当于把一个多维张量转化为二维

[1]before torch.Size([64, 32, 6, 6])
[2]after torch.Size([64, 1152])
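One caveat worth noting: view() requires the tensor's memory to be contiguous, while reshape() also handles the non-contiguous case by copying when necessary. A small sketch:

t = torch.randn(4, 6).t()    # the transpose is non-contiguous
# t.view(-1) would raise a RuntimeError here
print(t.reshape(-1).shape)   # torch.Size([24])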

5. torch.squeeze(input, dim) and torch.unsqueeze(input, dim)

import torch

a = torch.tensor([[[1, 5, 62, 54], [2, 6, 2, 6], [2, 65, 2, 6]],
                  [[1, 5, 62, 54], [2, 6, 2, 6], [2, 65, 2, 6]]])

print(a.size())
b = torch.unsqueeze(a, dim=0)  # insert a new size-1 dimension at position 0
print(b.size())
c = torch.squeeze(b, dim=0)    # remove dimension 0 (it has size 1)
print(c.size())

[1]torch.Size([2, 3, 4])
[2]torch.Size([1, 2, 3, 4])
[3]torch.Size([2, 3, 4])

torch.unsqueeze() inserts a new dimension of size 1 at the given position. torch.squeeze() with no dim removes every dimension of size 1; with dim, it removes that dimension only if its size is 1, and otherwise leaves the tensor unchanged.
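Continuing with b (shape [1, 2, 3, 4]) from the snippet above:

print(torch.squeeze(b, dim=1).size())  # torch.Size([1, 2, 3, 4]) -- dim 1 has size 2, so it is kept
print(torch.squeeze(b).size())         # torch.Size([2, 3, 4])    -- no dim: all size-1 dims removed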

6. The forward() method in a neural network

Traceback (most recent call last):
  File "C:/Users/龙/PycharmProjects/pythonProject3/neural_network/emotion_classfication.py", line 277, in <module>
    train_loss,train_acc=train_epoch(model,train_iter,optimizer,criterion)
  File "C:/Users/龙/PycharmProjects/pythonProject3/neural_network/emotion_classfication.py", line 243, in train_epoch
    pre=model(batch.text[0]).squeeze(1)
  File "D:\conda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\conda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 175, in _forward_unimplemented
    raise NotImplementedError
NotImplementedError

Today, while writing a neural network class, I accidentally typed forward as forwar. As a result the model could not be called directly as model(b_x) during training; I had to call model.forwar(b_x) instead.

Why doesn't the correctly spelled version need an explicit .forward() call, when calling a method from outside a class normally requires .method()? The reason is PyTorch itself: nn.Module implements __call__ (the _call_impl in the traceback above), which dispatches to self.forward. With the method misspelled, the dispatch falls through to the base class's unimplemented forward and raises NotImplementedError.
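A minimal sketch of the dispatch (the class and layer sizes are made up for illustration):

import torch
import torch.nn as nn

class Net(nn.Module):  # hypothetical example class
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):  # misspell this and net(x) raises NotImplementedError
        return self.fc(x)

net = Net()
print(net(torch.randn(3, 4)).shape)  # __call__ dispatches to forward -> torch.Size([3, 2])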

7. tensor.gather(dim, index)

Extracts elements from the tensor along dimension dim according to index, producing a new tensor with the same shape as index. Note that index must be an int64 tensor; otherwise you get

RuntimeError: gather_out_cpu(): Expected dtype int64 for index

hence the .long() calls below. For example:

import torch

x = torch.randn((7, 3))
index0 = torch.Tensor([[0], [1], [2], [0], [1], [2], [0]]).long()  # picks a column per row
index1 = torch.Tensor([[6, 3, 2]]).long()                          # picks a row per column
print(x)
y = x.gather(0, index1)
z = x.gather(1, index0)
print(y)
print(z)

Output:

tensor([[-0.5529, -3.1217, -0.1073],
        [-1.7415, -2.2135, -0.0471],
        [ 1.1015, -0.0090, -1.8197],
        [ 1.6703,  0.4852, -1.8249],
        [-0.7406, -0.6210, -0.9830],
        [ 0.3919, -0.3409, -1.0493],
        [-0.2657,  0.0174, -0.8856]])
tensor([[-0.2657,  0.4852, -1.8197]])
tensor([[-0.5529],
        [-2.2135],
        [-1.8197],
        [ 1.6703],
        [-0.6210],
        [-1.0493],
        [-0.2657]])

Note: 1. index must be a tensor of dtype int64 (hence the RuntimeError shown above).

2. With dim=0 the index values select rows, one element per position along each column; with dim=1 they select columns, one element per position along each row.
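The rule behind note 2 for a 2-D tensor, as given in the PyTorch documentation, can be verified against the tensors above:

# dim=0: out[i][j] = x[index[i][j]][j]  -- the index picks the row
# dim=1: out[i][j] = x[i][index[i][j]]  -- the index picks the column
print(y[0][1] == x[index1[0][1]][1])  # tensor(True)
print(z[3][0] == x[3][index0[3][0]])  # tensor(True)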
