001.Tensor

2024.3.18 (my first time writing notes in Markdown)

1. Creating a Tensor

Corresponding to the tensor_create section, there are three ways to create a Tensor.

1. Load other data types directly

These data types are usually Python's basic built-in types.

```python
import torch

data = [[1, 2], [3, 4]]             # a Python list
x_data = torch.tensor(data)         # build a tensor directly from the list
x_data2 = torch.tensor((1, 2, 3))   # a tuple works as well
# x_data3 = torch.tensor({"a": 5})  # fails: a dict cannot be converted
print("x_data2: ", x_data2)
```

PS: some commonly used Tensor attributes:

```python
# x_data.dtype     data type of the elements in the tensor
# x_data.device    whether the tensor lives on the CPU or a GPU
# x_data.stride()  strides (spacing in memory between elements along each axis)
# x_data.real      real part (for complex tensors)
# x_data.imag      imaginary part (for complex tensors)
# x_data.T         transpose
# x_data.grad_fn   the function that computes the backward gradient (PyTorch records this automatically)
# x_data.grad      the object holding the accumulated gradient
```
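A minimal sketch (my own example, not from the original notes) of how these attributes look on a small tensor; grad_fn and grad are only populated once the tensor takes part in an autograd computation:

```python
import torch

x_data = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
print(x_data.dtype)     # torch.float32
print(x_data.device)    # cpu (or cuda:0 if the tensor is on a GPU)
print(x_data.stride())  # (2, 1): memory spacing along each axis
print(x_data.T)         # transposed view of the same data

y = (x_data * 2).sum()  # run a small computation so autograd records a graph
y.backward()
print(x_data.grad_fn)   # None: leaf tensors have no grad_fn (y.grad_fn would be <SumBackward0>)
print(x_data.grad)      # gradient accumulated by backward(), all 2s here
```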

2. Initialize a tensor with built-in functions

Initialize an all-ones matrix or a diagonal matrix (for example, in auto_guard the tensor passed in initially must be an all-ones matrix).

```python
import torch

data = torch.ones(1, 2, 3)          # all ones
data1 = torch.zeros(1, 3, 4)        # all zeros
data2 = torch.randn(3, 4, 5)        # samples from a standard normal distribution
data3 = torch.eye(4, 5)             # ones on the main diagonal
data4 = torch.randint(5, (2, 10))   # random integers in [0, 5)
print("data type: ", type(data4))
print("data4: ", data4)
```

3. Initialize from numpy

PS: how a matrix is stored by the computer

- data = meta_data + raw_data
- meta_data = shape / dtype / stride / dim / device
- raw_data = data_ptr
- meta_data stores the matrix's properties (element type, number of dimensions, strides, and so on); raw_data is just the pointer to the stored buffer.
- Both numpy and tensor need efficient operations on contiguous matrices, but a traditional Python list stores its elements non-contiguously in memory, which is inconvenient for matrix math, so under the hood a tensor uses the same kind of data storage as numpy for its computations.
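A quick sketch (my own example) of the meta_data / raw_data split described above: a view keeps the same raw buffer and only the meta data changes.

```python
import torch

t = torch.arange(12, dtype=torch.float32).reshape(3, 4)

# "meta_data": properties describing how the raw buffer is interpreted
print(t.shape, t.dtype, t.stride(), t.dim(), t.device)

# "raw_data": the address of the underlying buffer
print(t.data_ptr())

# a view shares the raw buffer, only the meta data differs
v = t.view(4, 3)
print(v.data_ptr() == t.data_ptr())  # True
```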

Converting between numpy and tensor

```python
import torch
import numpy as np

np_array = np.array([1, 2, 3])
# use torch.from_numpy() to convert a numpy array to a tensor
tensor_numpy = torch.from_numpy(np_array)
# use .numpy() to convert a tensor back to a numpy array
data_numpy = tensor_numpy.numpy()
print("numpy tensor: ", tensor_numpy)
```

2. Usage of tensor.to()

1. Converting the data type

```python
import torch

tensor = torch.ones(4, 5)
tensor = tensor.to(torch.float32)
```
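A small follow-up (my own example): when the target dtype differs, .to() returns a converted copy and leaves the original tensor untouched.

```python
import torch

tensor = torch.ones(4, 5)            # default dtype is torch.float32
tensor_int = tensor.to(torch.int64)  # converted copy
print(tensor.dtype)                  # torch.float32
print(tensor_int.dtype)              # torch.int64
```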

2. Converting the compute device

First form: convert directly

```python
import torch

tensor = torch.ones(4, 5)
tensor_0 = tensor.to(torch.int32).to("cuda:0")  # data migration h2d (h: host, d: device, i.e. the GPU)
print("tensor device: ", tensor_0.device)
```

Second form: a more complete way to write it

```python
import torch

tensor = torch.ones(4, 5)
if torch.cuda.is_available():
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")
tensor_2 = tensor.to(device)
print(tensor_2.device)
```
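For completeness (my own sketch, not from the original notes), the reverse direction d2h works the same way: .to("cpu") copies the data back to host memory, and is a no-op on a machine without a GPU.

```python
import torch

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
t = torch.ones(4, 5).to(device)   # h2d when a GPU is available
t_cpu = t.to("cpu")               # d2h copy back to the host
print(t.device, t_cpu.device)
```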

3. tensor.transpose()

PS: on the contiguity of the stored data in memory

```python
import torch

data0 = torch.randn(4, 6)        # 24 elements
data1 = data0.reshape(6, 4)
data2 = data0.view(6, 4)         # reshape and view give the same result here
data3 = data0.transpose(0, 1)    # this view is non-contiguous
print(data0)
print(data2)
print(data3)
```

Running this prints data0 and the two 6x4 rearrangements data2 and data3 (the screenshot of the output is omitted here).

Now print the underlying storage instead:

```python
import torch

data0 = torch.randn(4, 6)        # 24 elements
data1 = data0.reshape(6, 4)
data2 = data0.view(6, 4)         # reshape and view give the same result here
data3 = data0.transpose(0, 1)    # this view is non-contiguous
print(data0.storage())
print(data3.storage())
```

As you can see, after reshape or transpose the storage of the original data has not changed, yet what print shows has changed. This means that when the elements are read they are not accessed in the order they sit in memory.

We call this situation uncontiguous: the access order does not follow the layout in memory.
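A quick way to see the problem (my own sketch): the stride and Tensor.is_contiguous() show whether walking over the tensor follows memory order.

```python
import torch

data0 = torch.randn(4, 6)
data2 = data0.reshape(6, 4)
data3 = data0.transpose(0, 1)
print(data2.stride(), data2.is_contiguous())  # (4, 1) True: elements of a row sit next to each other
print(data3.stride(), data3.is_contiguous())  # (1, 6) False: stepping along a row jumps 6 elements
```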

If we now call the .view() method on such a tensor, it raises an error:

```python
import torch

data0 = torch.randn(4, 6)        # 24 elements
data1 = data0.reshape(6, 4)
data2 = data0.transpose(0, 1)    # data2 is non-contiguous
data3 = data2.view(4, 6)         # RuntimeError: view() needs contiguous data, use .reshape() or .contiguous() first
```

It fails because the data is not contiguous!

In this case we can call .contiguous() to make the tensor contiguous again, but this effectively copies the data into a new buffer:

```python
import torch

data0 = torch.randn(4, 6)        # 24 elements
data1 = data0.transpose(0, 1)    # non-contiguous view
data2 = data1.contiguous()       # contiguous copy of the data
print(data1.data_ptr())
print(data2.data_ptr())
```

The two printed addresses differ, confirming that .contiguous() copied the data into newly allocated storage.
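As a follow-up check (my own sketch), the copy behaves like a freshly allocated tensor, so .view() works on it again:

```python
import torch

data0 = torch.randn(4, 6)
data1 = data0.transpose(0, 1)    # non-contiguous view
data2 = data1.contiguous()       # contiguous copy
print(data2.is_contiguous())     # True
print(data2.view(4, 6).shape)    # torch.Size([4, 6]), no error this time
```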
