Machine Learning - Choosing a Model

This post continues from the previous one:
Machine Learning - Preparing Data

An overview of the main PyTorch building blocks:

  • torch.nn: Contains all of the building blocks for computational graphs (essentially a series of computations executed in a particular way). The nn module provides a rich set of neural network components, including layers, activation functions, loss functions, and other utilities.
  • torch.nn.Parameter: Stores tensors that can be used with nn.Module. If requires_grad=True, gradients (used for updating model parameters via gradient descent) are calculated automatically; this is often referred to as "autograd". Typically used to represent parameters such as weights and biases when defining a model.
  • torch.nn.Module: The base class for all neural network modules; all the building blocks for neural networks are subclasses. If you're building a neural network in PyTorch, your models should subclass nn.Module. Requires a forward() method to be implemented.
  • torch.optim: Contains various optimization algorithms (these tell the model parameters stored in nn.Parameter how best to change to improve gradient descent and in turn reduce the loss).
  • forward(): All nn.Module subclasses require a forward() method; this defines the computation that will take place on the data passed to the particular nn.Module (e.g. the linear regression formula implemented below).

You can think of it this way: almost everything in a PyTorch neural network comes from torch.nn.

  • nn.Module contains the larger building blocks (layers)
  • nn.Parameter contains the smaller parameters like weights and biases (put these together to make nn.Module)
  • forward() tells the larger blocks how to make calculations on inputs (tensors full of data) within nn.Module(s)
  • torch.optim contains optimization methods on how to improve the parameters within nn.Parameter to better represent input data.

Roughly speaking: a module holds its parameters (nn.Parameter), performs computation on them (forward()), and can even have those parameters adjusted to optimize it (torch.optim).
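torch.optim is only mentioned here and not used until training, so as a minimal sketch of how it connects to the pieces above (the model, loss, and learning rate below are hypothetical stand-ins, not from this post):

import torch
from torch import nn

model = nn.Linear(1, 1)  # any nn.Module with nn.Parameter(s) inside

# The optimizer receives the model's nn.Parameter(s) and will update them
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.01)

# One hypothetical training step
loss = model(torch.randn(1, 1)).sum()  # stand-in for a real loss function
loss.backward()        # compute gradients for each nn.Parameter
optimizer.step()       # use the gradients to update the parameters
optimizer.zero_grad()  # reset gradients before the next step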

Here is a brief introduction to the Neural Network Block.
A Neural Network Block usually refers to a modular component of a neural network. It can contain one or more layers plus some additional operations, and is designed to perform a specific function or implement a specific network structure.
Blocks are designed to simplify building and managing neural network models and to improve code readability and maintainability. By dividing a model into blocks, its different parts can be separated so that each can be designed, tuned, and reused independently; this modular design makes building complex networks more flexible and efficient.
For example: the convolutional block in a convolutional neural network, sketched below.
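As a rough sketch (not from the original post), a convolutional block might bundle a convolution, a normalization layer, and an activation into one reusable nn.Module:

import torch
from torch import nn

class ConvBlock(nn.Module):
  """A hypothetical convolutional block: Conv2d -> BatchNorm2d -> ReLU."""
  def __init__(self, in_channels, out_channels):
    super().__init__()
    self.block = nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU())

  def forward(self, x):
    return self.block(x)

# Blocks can be stacked and reused independently
features = nn.Sequential(ConvBlock(3, 16), ConvBlock(16, 32))
print(features(torch.randn(1, 3, 28, 28)).shape)  # torch.Size([1, 32, 28, 28])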

The code is shown below.

import torch
from torch import nn

class LinearRegressionModel(nn.Module):  # subclass of nn.Module
  def __init__(self):
    super().__init__()

    # Initialize model parameters
    self.weights = nn.Parameter(torch.randn(1,
                                            dtype=torch.float),
                                requires_grad = True)
    self.bias = nn.Parameter(torch.randn(1,
                                         dtype=torch.float),
                             requires_grad = True)  # requires_grad=True means PyTorch will track the gradients of this specific parameter for use with torch.autograd and gradient descent (for many torch.nn modules, requires_grad=True is set by default)

  # Any child class of nn.Module needs to override forward()
  # This defines the forward computation of the model
  def forward(self, x: torch.Tensor) -> torch.Tensor:
    return self.weights * x + self.bias

# Set a manual seed since nn.Parameter(s) are randomly initialized
torch.manual_seed(42)

# Create an instance of the model (this is a subclass of nn.Module that contains nn.Parameter(s))
model_0 = LinearRegressionModel()

# Check the nn.Parameter(s) within the nn.Module subclass
print(f"Check the nn.Parameter(s): {list(model_0.parameters())}")

# List named parameters
print(f"List named parameters: {model_0.state_dict()}")

# Output:
Check the nn.Parameter(s): [Parameter containing:
tensor([0.3367], requires_grad=True), Parameter containing:
tensor([0.1288], requires_grad=True)]
List named parameters: OrderedDict([('weights', tensor([0.3367])), ('bias', tensor([0.1288]))])
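A common use of state_dict() (not shown in this post) is saving and reloading the learned parameters; a minimal sketch, with a hypothetical file name:

# Save only the parameters (hypothetical path "model_0.pth")
torch.save(model_0.state_dict(), "model_0.pth")

# Reload them into a fresh instance of the same class
loaded_model = LinearRegressionModel()
loaded_model.load_state_dict(torch.load("model_0.pth"))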


Use torch.inference_mode() to make predictions.
The data is passed to the model; it goes through the model's forward() method and produces a result using the computation defined there.

# Make predictions with model
with torch.inference_mode():
  y_test_preds = model_0(X_test)

As the name suggests, torch.inference_mode() is used when using a model for inference (making predictions). torch.inference_mode() turns off a bunch of things (like gradient tracking, which is necessary for training but not for inference) to make forward-passes (data going through the forward() method) faster.
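A quick way to see this (a small check, not from the original post): tensors produced inside torch.inference_mode() do not track gradients.

x = torch.ones(3, requires_grad=True)

with torch.inference_mode():
  y = x * 2

print(y.requires_grad)  # False: no gradient tracking inside inference_mode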

# Check the predictions
print(f"Number of testing samples: {len(X_test)}")
print(f"Number of predictions made: {len(y_test_preds)}")
print(f"Predicted values (X_test):\n {y_test_preds}")

import matplotlib.pyplot as plt

def plot_predictions(train_data = X_train,
                     train_labels = y_train,
                     test_data = X_test,
                     test_labels = y_test,
                     predictions = None):
  """
  Plots training data, test data and compares predictions
  """
  plt.figure(figsize=(10, 7))

  # Plot training data in blue
  plt.scatter(train_data, train_labels, c="b", s=4, label="Training data")

  # Plot test data in green
  plt.scatter(test_data, test_labels, c="g", s=4, label="Test data")

  if predictions is not None:
    plt.scatter(test_data, predictions, c="r", s=4, label="Predictions")

  plt.legend(prop={"size": 14})
  plt.show()  # display the figure (needed when running outside a notebook)

plot_predictions(predictions=y_test_preds)

print(f"check the difference:\n {y_test - y_test_preds}")  # 可以发现两者之间的差距是很大的

# Output:
Number of testing samples: 10
Number of predictions made: 10
Predicted values (X_test):
 tensor([[0.3982],
        [0.4049],
        [0.4116],
        [0.4184],
        [0.4251],
        [0.4318],
        [0.4386],
        [0.4453],
        [0.4520],
        [0.4588]])
check the difference:
 tensor([[0.4618],
        [0.4691],
        [0.4764],
        [0.4836],
        [0.4909],
        [0.4982],
        [0.5054],
        [0.5127],
        [0.5200],
        [0.5272]])

[Figure: training data (blue), test data (green), and the model's predictions (red), produced by plot_predictions()]

If you've read this far, how about a like~
