torch.split(tensor, split_size, dim=0)

Description: Splits the input tensor into equally sized chunks (when the size is divisible). If the tensor's size along the given dimension is not divisible by split_size, the last chunk will be smaller than the others.

Parameters

tensor (Tensor) -- the tensor to split
split_size (int) -- size of a single chunk
dim (int) -- dimension along which to split the tensor

>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.1135,  0.5779, -0.9737, -0.0718],
        [ 0.4136,  1.1577,  0.5689, -0.1970],
        [ 1.4281,  0.3540,  1.4346, -0.1444]])
>>> torch.split(x, 2, 1)
(tensor([[0.1135, 0.5779],
        [0.4136, 1.1577],
        [1.4281, 0.3540]]), tensor([[-0.9737, -0.0718],
        [ 0.5689, -0.1970],
        [ 1.4346, -0.1444]]))
>>> torch.split(x, 2, 0)
(tensor([[ 0.1135,  0.5779, -0.9737, -0.0718],
        [ 0.4136,  1.1577,  0.5689, -0.1970]]), tensor([[ 1.4281,  0.3540,  1.4346, -0.1444]]))


Original article: https://blog.csdn.net/gyt15663668337/article/details/91345951

Question: fix the errors in the following RNN text-classification script (torchtext AG_NEWS). Corrected version below; each correction is marked with a "fix:" comment. Note this assumes the newer torchtext vocab API (build_vocab_from_iterator returning a Vocab with get_stoi/set_default_index).

import torch
import torch.nn as nn
from collections import Counter                     # fix: Counter was used but never imported
from torch.nn.utils.rnn import pad_sequence         # fix: needed to pad variable-length batches
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

# Data preprocessing
tokenizer = get_tokenizer('basic_english')
train_iter = AG_NEWS(split='train')
counter = Counter()
for (label, line) in train_iter:
    counter.update(tokenizer(line))
vocab = build_vocab_from_iterator([counter], specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])              # fix: map out-of-vocabulary words to <unk>
word2idx = vocab.get_stoi()                          # fix: the new Vocab has get_stoi(), not .stoi

# Hyperparameters
embedding_dim = 64
hidden_dim = 128
num_epochs = 10
batch_size = 64

# Model definition
class RNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim):
        super(RNN, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.rnn = nn.RNN(embedding_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 4)           # 4 AG_NEWS classes

    def forward(self, x):
        x = self.embedding(x)
        out, _ = self.rnn(x)
        out = self.fc(out[:, -1, :])                 # classify from the last time step
        return out

# Initialize model, optimizer and loss function
model = RNN(len(vocab), embedding_dim, hidden_dim)
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

# Build the training set: AG_NEWS labels are 1..4, shift them to 0..3
train_iter = AG_NEWS(split='train')
train_data = []
for (label, line) in train_iter:
    label = torch.tensor([int(label) - 1])
    line = torch.tensor([word2idx[word] for word in tokenizer(line)])
    train_data.append((line, label))

# fix: sentences have different lengths, so the default collate_fn cannot stack
# them; pad each batch to its longest sequence (using the <unk> index, since no
# dedicated <pad> token was defined)
def collate_batch(batch):
    lines, labels = zip(*batch)
    lines = pad_sequence(lines, batch_first=True, padding_value=vocab["<unk>"])
    labels = torch.cat(labels)
    return lines, labels

train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           shuffle=True, collate_fn=collate_batch)

# Training loop
for epoch in range(num_epochs):
    total_loss = 0.0
    for input, target in train_loader:
        model.zero_grad()
        output = model(input)
        loss = criterion(output, target)             # fix: target is already 1-D after collate_batch
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * input.size(0)
    print("Epoch: {}, Loss: {:.4f}".format(epoch + 1, total_loss / len(train_data)))
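
As a quick sanity check (a sketch, assuming the corrected script above has built train_loader and model), pull one batch and confirm the padded shapes:

inputs, targets = next(iter(train_loader))
print(inputs.shape)          # (batch_size, longest sequence in this batch)
print(targets.shape)         # (batch_size,)
print(model(inputs).shape)   # (batch_size, 4) -- one logit per AG_NEWS class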