PyTorch
lwycc233
Persistence always pays off!
PyTorch: Comparing NumPy and Torch
import torch as t
import numpy as np
np_data = np.arange(6).reshape(2, 3)
torch_data = t.from_numpy(np_data)  # convert the NumPy array to a Tensor
tensor2array = torch_data.numpy()   # convert the Tensor back to a NumPy array
print("--…
(Original post, 2019-04-02)
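A minimal sketch of the round trip above, with the three objects printed so the shared values are visible; the print labels are illustrative, not the post's own.

import torch as t
import numpy as np

np_data = np.arange(6).reshape(2, 3)
torch_data = t.from_numpy(np_data)   # NumPy array -> Tensor
tensor2array = torch_data.numpy()    # Tensor -> NumPy array

print("numpy:\n", np_data)
print("torch:\n", torch_data)
print("back to numpy:\n", tensor2array)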
4.3 Optimizers
Implementing stochastic gradient descent (SGD). You should master: the basic usage of the optimizers, how to set different learning rates for different parts of a model, and how to adjust the learning rate.
import torch as t
from torch.autograd import Variable as V
from torch import nn
# implement stochastic gradient descent (SGD)
# first define a LeNet network
class Net(nn.Module): …
(Original post, 2019-05-10)
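A minimal sketch of per-parameter-group learning rates with torch.optim.SGD; TinyNet is a hypothetical stand-in for the post's LeNet, and the layer sizes and learning rates are illustrative.

import torch as t
from torch import nn, optim
from torch.nn import functional as F

class TinyNet(nn.Module):                 # hypothetical stand-in for the post's LeNet
    def __init__(self):
        super(TinyNet, self).__init__()
        self.features = nn.Linear(10, 20)
        self.classifier = nn.Linear(20, 2)
    def forward(self, x):
        return self.classifier(F.relu(self.features(x)))

net = TinyNet()
# a small learning rate for the feature extractor, a larger one for the classifier
optimizer = optim.SGD([
    {'params': net.features.parameters(), 'lr': 1e-3},
    {'params': net.classifier.parameters(), 'lr': 1e-2},
], lr=1e-3, momentum=0.9)

loss = net(t.randn(4, 10)).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()

# adjusting the learning rate later: scale each group's lr in place
for group in optimizer.param_groups:
    group['lr'] *= 0.1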
Image Convolution
1. tf.nn.conv2d is the function TensorFlow uses to implement convolution:
tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, name=None)
Leaving aside the name parameter, which names the operation, the method has five relevant parameters. The first, input, is the input image to be convolved; it must be a Tensor with shape [batch, in…
(Original post, 2019-05-07)
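A minimal sketch of calling tf.nn.conv2d, assuming the TensorFlow 1.x API quoted above; the input and filter shapes are illustrative.

import tensorflow as tf  # assumes TensorFlow 1.x

image = tf.constant(1.0, shape=[1, 5, 5, 1])    # [batch, height, width, in_channels]
kernel = tf.constant(1.0, shape=[3, 3, 1, 1])   # [filter_h, filter_w, in_channels, out_channels]
conv = tf.nn.conv2d(image, kernel, strides=[1, 1, 1, 1], padding='SAME')

with tf.Session() as sess:
    print(sess.run(conv).shape)   # (1, 5, 5, 1): 'SAME' padding keeps the spatial size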
5.1 Data Processing: Data Loading
Learn how to define a custom dataset whose samples can then be fetched one by one.
import torch as t
from torch.utils import data
import os
from PIL import Image
import numpy as np
class DogCat(data.Dataset):
    def __init__(self, root):
        imgs = os.lis…
(Original post, 2019-05-13)
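A minimal sketch of a complete custom Dataset in the style the preview starts, assuming images sit flat under `root` with the label encoded in the file name (dog.*/cat.*, as in the Dogs vs. Cats data); the original DogCat class may differ in detail.

import os
import numpy as np
import torch as t
from PIL import Image
from torch.utils import data

class DogCat(data.Dataset):
    def __init__(self, root):
        imgs = os.listdir(root)
        self.imgs = [os.path.join(root, img) for img in imgs]

    def __getitem__(self, index):
        img_path = self.imgs[index]
        # file names look like "dog.1234.jpg" or "cat.5678.jpg"
        label = 1 if 'dog' in os.path.basename(img_path) else 0
        pil_img = Image.open(img_path)
        array = np.asarray(pil_img)
        return t.from_numpy(array), label

    def __len__(self):
        return len(self.imgs)

# usage: dataset = DogCat('./data/dogcat/'); img, label = dataset[0]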
4.2.4 Loss Functions
Cross-entropy loss (CrossEntropyLoss).
import torch as t
from torch import nn
from torch.autograd import Variable as V
# batch_size = 3; compute a score for each class (only two classes)
score = V(t.randn(3, 2))
print(score)
# the three samples belong to classes 1, 0 and 1; the label must…
(Original post, 2019-05-08)
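A minimal sketch completing the example above: the class-index labels are wrapped in a LongTensor, which is what nn.CrossEntropyLoss expects.

import torch as t
from torch import nn
from torch.autograd import Variable as V

score = V(t.randn(3, 2))             # raw scores (logits) for 3 samples and 2 classes
label = V(t.LongTensor([1, 0, 1]))   # class indices must be a LongTensor
criterion = nn.CrossEntropyLoss()
loss = criterion(score, label)
print(loss)                          # scalar loss averaged over the batch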
4.2.3 Recurrent Neural Network Layers
1. nn.LSTM() parameters:
input_size: The number of expected features in the input `x`
hidden_size: The number of features in the hidden state `h`
num_layers: Number of recurrent layers. E.g., setting ``num_laye…
(Original post, 2019-05-08)
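A minimal sketch of constructing an nn.LSTM with the parameters listed above and running a forward pass; the sequence length, batch size and feature sizes are illustrative.

import torch as t
from torch import nn

lstm = nn.LSTM(input_size=4, hidden_size=3, num_layers=1)
inputs = t.randn(2, 3, 4)        # (seq_len, batch, input_size)
h0 = t.randn(1, 3, 3)            # (num_layers, batch, hidden_size)
c0 = t.randn(1, 3, 3)

output, (hn, cn) = lstm(inputs, (h0, c0))
print(output.shape)              # torch.Size([2, 3, 3]): one hidden state per time step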
4.2.2 Activation Functions
1. The most commonly used activation function is ReLU. Mathematical form: ReLU(x) = max(0, x)
import torch as t
from torch import nn
from torch.autograd import Variable as V
relu = nn.ReLU(inplace=True)  # inplace=True writes the output directly over the input, saving memory / GPU memory
…
(Original post, 2019-05-08)
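A minimal sketch of applying the ReLU module above to a toy input; the values are illustrative, and the second print shows the in-place overwrite mentioned in the comment.

import torch as t
from torch import nn
from torch.autograd import Variable as V

relu = nn.ReLU(inplace=True)
x = V(t.Tensor([[-1.0, 0.5], [2.0, -3.0]]))
y = relu(x)
print(y)   # negative entries clamped to 0: [[0.0, 0.5], [2.0, 0.0]]
print(x)   # with inplace=True, x itself now holds the same result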
Chapter 4: Multi-Layer Perceptron
Two fully connected layers, using the sigmoid function as the activation.
import torch as t
from torch import nn
from torch.autograd import Variable as V
# fully connected layer
class Linear(nn.Module):  # inherits from nn.Module
    def __init__(self, in_features, out_features): …
(Original post, 2019-05-06)
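A minimal sketch of the two-layer perceptron described above, using the built-in nn.Linear instead of the post's hand-written Linear class; the layer sizes are illustrative.

import torch as t
from torch import nn

class Perceptron(nn.Module):
    def __init__(self, in_features, hidden_features, out_features):
        super(Perceptron, self).__init__()
        self.layer1 = nn.Linear(in_features, hidden_features)
        self.layer2 = nn.Linear(hidden_features, out_features)

    def forward(self, x):
        x = t.sigmoid(self.layer1(x))   # sigmoid activation between the two layers
        return self.layer2(x)

perceptron = Perceptron(3, 4, 1)
print(perceptron(t.randn(2, 3)).shape)   # torch.Size([2, 1])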
Chapter 4: Implementing a Fully Connected Layer with nn.Module
import torch as t
from torch import nn
from torch.autograd import Variable as V
class Linear(nn.Module):  # inherits from nn.Module
    def __init__(self, in_features, out_features):
        print("constructor called")
        …
(Original post, 2019-05-06)
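A minimal sketch of a hand-written fully connected layer in the style the preview starts; the random initialisation and the y = xW + b formulation are the common textbook version, not necessarily the post's exact code.

import torch as t
from torch import nn

class Linear(nn.Module):
    def __init__(self, in_features, out_features):
        super(Linear, self).__init__()
        # learnable parameters must be wrapped in nn.Parameter so the module registers them
        self.w = nn.Parameter(t.randn(in_features, out_features))
        self.b = nn.Parameter(t.randn(out_features))

    def forward(self, x):
        return x.mm(self.w) + self.b   # y = xW + b

layer = Linear(4, 3)
print(layer(t.randn(2, 4)).shape)       # torch.Size([2, 3])
for name, param in layer.named_parameters():
    print(name, param.size())           # w: (4, 3), b: (3,)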
Chapter 3: Linear Regression with autograd/Variable
import torch as t
from torch.autograd import Variable as V
from matplotlib import pyplot as plt
from IPython import display
# set a random seed so the output below is identical across machines;
# the random numbers are then the same on every run
t.manual_seed(1000)
def get_fake…
(Original post, 2019-05-05)
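A minimal sketch of an autograd-based linear regression loop in the spirit of this post; the fake-data generator (y = 2x + 3 + noise), learning rate and iteration count are assumptions, not the post's exact values.

import torch as t
from torch.autograd import Variable as V

t.manual_seed(1000)

def get_fake_data(batch_size=8):
    # assumed generator: y = 2x + 3 + noise
    x = t.rand(batch_size, 1) * 20
    y = x * 2 + (1 + t.randn(batch_size, 1)) * 3
    return x, y

w = V(t.rand(1, 1), requires_grad=True)
b = V(t.zeros(1, 1), requires_grad=True)
lr = 0.001

for i in range(8000):
    x, y = get_fake_data()
    x, y = V(x), V(y)
    y_pred = x.mm(w) + b                 # forward pass
    loss = (0.5 * (y_pred - y) ** 2).sum()
    loss.backward()                      # autograd fills w.grad and b.grad
    w.data.sub_(lr * w.grad.data)        # gradient-descent step on the raw data
    b.data.sub_(lr * b.grad.data)
    w.grad.data.zero_()                  # gradients accumulate, so clear them each step
    b.grad.data.zero_()

print(w.data.squeeze(), b.data.squeeze())   # should move toward 2 and 3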
Chapter 3: Linear Regression with Tensors
import torch as t
from matplotlib import pyplot as plt
from IPython import display
t.manual_seed(1000)  # set the random seed
def get_fake_data(batch_size=8):
    x = t.randn(batch_size, 1) * 20  # random numbers: batch_size rows, 1 column
    …
(Original post, 2019-05-05)
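A minimal sketch of the tensor-only version, where the gradients of the squared loss with respect to w and b are written out by hand instead of using autograd; the data generator (y = 2x + 3 + noise) and hyperparameters are assumptions.

import torch as t

t.manual_seed(1000)

def get_fake_data(batch_size=8):
    # assumed generator: y = 2x + 3 + noise
    x = t.rand(batch_size, 1) * 20
    y = x * 2 + (1 + t.randn(batch_size, 1)) * 3
    return x, y

w = t.rand(1, 1)
b = t.zeros(1, 1)
lr = 0.001

for i in range(8000):
    x, y = get_fake_data()
    y_pred = x.mm(w) + b                 # forward: y = xw + b
    loss = (0.5 * (y_pred - y) ** 2).sum()

    dy_pred = y_pred - y                 # backward, by hand
    dw = x.t().mm(dy_pred)
    db = dy_pred.sum()

    w.sub_(lr * dw)                      # in-place gradient-descent update
    b.sub_(lr * db)

print(w.squeeze(), b.squeeze())          # should move toward 2 and 3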
PyTorch: Regression
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import matplotlib.pyplot as plt
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # turn the 1-D tensor into a 2-D one
# call signature: linspace(x1, x…
(Original post, 2019-04-02)
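A minimal sketch of the curve-fitting regression this post builds (in the style of the Morvan PyTorch tutorials); the quadratic target y = x² + noise, the layer sizes and the learning rate are assumptions.

import torch
import torch.nn.functional as F

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)   # shape (100, 1)
y = x.pow(2) + 0.2 * torch.rand(x.size())                # assumed noisy quadratic target

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = F.relu(self.hidden(x))   # hidden layer with ReLU
        return self.predict(x)       # linear output layer

net = Net(1, 10, 1)
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()

for step in range(200):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())   # the loss should shrink as the net fits the curve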
PyTorch: What Is an Activation Function (Deep Learning)
1. Recommended activation functions:
Convolutional neural networks: relu
Recurrent neural networks: relu or tanh
2. Implementation in PyTorch:
import torch as t
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt
# fake data
x = t.…
(Original post, 2019-04-02)
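A minimal sketch of computing the common activations on fake data through torch.nn.functional; the input range is illustrative.

import torch as t
import torch.nn.functional as F

x = t.linspace(-5, 5, 200)       # fake data: 200 points from -5 to 5
y_relu = F.relu(x)
y_sigmoid = t.sigmoid(x)
y_tanh = t.tanh(x)
y_softplus = F.softplus(x)

print(y_relu.min().item(), y_relu.max().item())   # ReLU clips all negative inputs to 0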
PyTorch: Variable
Converting between Variable and Tensor; the meaning of the requires_grad parameter; printing a Variable's gradient.
import torch as t
from torch.autograd import Variable
tensor = t.FloatTensor([[1, 2], [3, 4]])
variable = Variable(tensor, requires_grad=True)  # re…
(Original post, 2019-04-02)
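A minimal sketch of the gradient computation this post illustrates; the mean-of-squares function is the one used in the classic tutorial this series follows, but treat it as an assumption here.

import torch as t
from torch.autograd import Variable

tensor = t.FloatTensor([[1, 2], [3, 4]])
variable = Variable(tensor, requires_grad=True)

v_out = t.mean(variable * variable)   # a scalar function of the Variable
v_out.backward()                      # backprop fills variable.grad

print(variable.grad)                  # d(mean(x^2))/dx = x/2 -> [[0.5, 1.0], [1.5, 2.0]]
print(variable.data)                  # the underlying Tensor
print(variable.data.numpy())          # and its NumPy form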
4.4 nn.functional
1. Differences between nn.Module and nn.functional
import torch as t
from torch.autograd import Variable as V
from torch import nn
input = V(t.randn(2, 3))
model = nn.Linear(3, 4)
output1 = model(input)
output2 = nn.f…
(Original post, 2019-05-10)
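A minimal sketch completing the comparison above: the layer-style call and the functional call give the same result when the functional version is handed the layer's own weight and bias.

import torch as t
from torch import nn
from torch.nn import functional as F

input = t.randn(2, 3)
model = nn.Linear(3, 4)

output1 = model(input)                                # nn.Module: parameters live inside the layer
output2 = F.linear(input, model.weight, model.bias)   # nn.functional: parameters passed explicitly

print(t.allclose(output1, output2))                   # True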