Multilayer Perceptron (MLP)

Compared with the perceptron, a multilayer perceptron adds one or more hidden layers, overcoming the perceptron's inability to classify linearly non-separable data; see the sketch below.

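The classic illustration is XOR, which no single-layer perceptron can separate. A minimal sketch (the architecture and hyperparameters here are illustrative, not from the original text) showing that one hidden layer suffices:

```python
import torch
from torch import nn

torch.manual_seed(0)

# XOR: the classic linearly non-separable problem
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# One hidden layer makes the problem learnable
model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.BCELoss()

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(X).round().squeeze())   # expected (training is stochastic): tensor([0., 1., 1., 0.])
```
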
The full PyTorch code for an MLP, trained on CIFAR-10:

```python
import os
import torch
from torch import nn
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader
from torchvision import transforms
class MLP(nn.Module):
    '''
    Multilayer Perceptron
    '''
    def __init__(self):
        # super() calls a parent-class method; __init__() is the constructor.
        # Even if you do not write __init__() yourself, it runs automatically
        # when an instance is created; writing your own lets the class perform
        # setup work at creation time. If both the subclass and the parent
        # define __init__, the subclass's version overrides the parent's, so we
        # call super().__init__() here to make sure nn.Module's setup still runs.
        super().__init__()

        # Flatten converts the 3D image representation (width, height and
        # channels) into 1D format.
        # nn.Sequential is an ordered container, usable much like a list:
        # modules are added to the computation graph and executed in the order
        # they were passed to the constructor.
        # nn.Linear builds a fully connected layer:
        # nn.Linear(in_features, out_features, bias=True); nn.Conv2d builds a
        # convolutional layer, nn.ConvTranspose2d a transposed convolution.
        # The network below has three layers: the first maps the 32*32*3 inputs
        # to 64 neurons, the second maps 64 to 32, and the third maps 32 to 10.
        # The first two layers are each followed by a ReLU activation; the
        # third has no activation and outputs raw scores directly.
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 10)
        )

    # When training a PyTorch model you do not call forward() directly:
    # passing inputs to the model instance invokes forward() automatically.
    def forward(self, x):
        '''Forward pass'''
        return self.layers(x)
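# For example (illustrative): calling the instance runs forward() through
# nn.Module.__call__:
#   out = MLP()(torch.randn(1, 3, 32, 32))   # out.shape == torch.Size([1, 10])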
    
if __name__ == '__main__':

    # Set fixed random number seed
    torch.manual_seed(42)

    # Prepare CIFAR-10 dataset (50,000 training images; with batch_size=10
    # that is 5,000 minibatches per epoch)
    dataset = CIFAR10(os.getcwd(), download=True, transform=transforms.ToTensor())
    trainloader = DataLoader(dataset, batch_size=10, shuffle=True, num_workers=1)

    # Initialize the MLP
    mlp = MLP()

    # Define the loss function and optimizer
    loss_function = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-4)

    # Run the training loop. Step by step, these are the things that happen
    # within it:
    # 1. We run a number of full iterations over the data, known as epochs.
    #    Here, we use 5 epochs, as defined by range(0, 5).
    # 2. We reset the running loss used for printing to 0.0.
    # 3. Per epoch, we iterate over the training dataset - more specifically,
    #    over the minibatches within it, as specified by the batch size (set in
    #    the trainloader above). For each minibatch:
    #    3.1. We decompose the data into inputs and targets (x and y values,
    #         respectively).
    #    3.2. We zero the gradients in the optimizer, to ensure that it starts
    #         fresh for this minibatch.
    #    3.3. We perform the forward pass - in effect feeding the inputs to
    #         the model, which, recall, was initialized as mlp.
    #    3.4. We compute the loss value from the outputs of the model and the
    #         ground truth, available in targets.
    #    3.5. This is followed by the backward pass, where the gradients are
    #         computed, and the optimization step, where the model is adapted.
    #    3.6. Finally, we print statistics, but only at every 500th minibatch.
    #         At the end of the entire process, we print that training has
    #         finished.
    for epoch in range(0, 5):
        # Print epoch
        print(f'Starting epoch {epoch+1}')

        # Set current loss value
        current_loss = 0.0

        # Iterate over the DataLoader for training data
        for i, data in enumerate(trainloader, 0):
            # Get inputs
            inputs, targets = data

            # Zero the gradients
            optimizer.zero_grad()

            # Perform forward pass
            outputs = mlp(inputs)

            # Compute loss
            loss = loss_function(outputs, targets)

            # Perform backward pass
            loss.backward()

            # Perform optimization
            optimizer.step()

            # Print statistics
            current_loss += loss.item()
            if i % 500 == 499:
                print('Loss after mini-batch %5d: %.3f' %
                      (i + 1, current_loss / 500))
                current_loss = 0.0

    # Process is complete.
    print('Training process has finished.')
```
Output from one run (later epochs elided):

```
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to /Users/lkq/Downloads/test1/cifar-10-python.tar.gz
Extracting /Users/lkq/Downloads/test1/cifar-10-python.tar.gz to /Users/lkq/Downloads/test1
Starting epoch 1
Loss after mini-batch   500: 2.250
Loss after mini-batch  1000: 2.107
Loss after mini-batch  1500: 2.017
Loss after mini-batch  2000: 1.977
Loss after mini-batch  2500: 1.944
Loss after mini-batch  3000: 1.909
Loss after mini-batch  3500: 1.889
Loss after mini-batch  4000: 1.882
Loss after mini-batch  4500: 1.860
Loss after mini-batch  5000: 1.856
...
Training process has finished.
```
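One detail worth spelling out: the model's last layer has no activation because nn.CrossEntropyLoss expects raw logits; it applies LogSoftmax followed by NLLLoss internally. A quick check:

```python
import torch
from torch import nn

logits = torch.randn(4, 10)              # raw scores, as the MLP outputs them
targets = torch.tensor([1, 0, 3, 9])

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))           # True
```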

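The script above only trains the model. A minimal evaluation sketch, not part of the original code, could look like this; it assumes mlp and the imports from the script above:

```python
# Hypothetical evaluation pass, reusing mlp from the training script above
testset = CIFAR10(os.getcwd(), train=False, download=True,
                  transform=transforms.ToTensor())
testloader = DataLoader(testset, batch_size=100, shuffle=False)

mlp.eval()                               # switch off training-only behaviour
correct, total = 0, 0
with torch.no_grad():                    # no gradients needed for evaluation
    for inputs, targets in testloader:
        preds = mlp(inputs).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
print(f'Test accuracy: {correct / total:.3f}')
```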

A multilayer perceptron (Multilayer Perceptron, MLP) is a commonly used neural network model for both classification and regression. It consists of an input layer, hidden layers and an output layer; each layer is made up of multiple neurons, and neurons in adjacent layers are connected by weights. An MLP can also be implemented from scratch in Python, as follows:

1. Import the required libraries: NumPy for numerical computation and scikit-learn for data preprocessing.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
```

2. Prepare the data: split the raw dataset into training and test sets and scale the features. (X and y are assumed to hold your features and labels; this from-scratch implementation uses column-major samples, hence the transposes at the end.)

```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# The code below expects samples as columns: shape (n_features, m)
X_train, X_test = X_train.T, X_test.T
y_train = y_train.reshape(1, -1)
y_test = y_test.reshape(1, -1)
```

3. Initialize weights and biases: define a function that randomly initializes the weights and zero-initializes the biases.

```python
def initialize_parameters(layer_dims):
    parameters = {}
    for l in range(1, len(layer_dims)):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

layer_dims = [X_train.shape[0], 64, 32, 1]   # input dim, two hidden layers, one output
parameters = initialize_parameters(layer_dims)
```

4. Forward propagation: define the forward pass that computes the network's output, with ReLU in the hidden layers and a sigmoid at the output.

```python
def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def forward_propagation(X, parameters):
    L = len(parameters) // 2                 # number of layers
    A = X
    caches = []
    for l in range(1, L):
        A_prev = A
        Z = np.dot(parameters['W' + str(l)], A_prev) + parameters['b' + str(l)]
        A = relu(Z)
        caches.append((A_prev, Z))
    ZL = np.dot(parameters['W' + str(L)], A) + parameters['b' + str(L)]
    AL = sigmoid(ZL)
    caches.append((A, ZL))
    return AL, caches

AL, caches = forward_propagation(X_train, parameters)
```

5. Compute the loss: the binary cross-entropy between the network's output and the true labels.

```python
def compute_cost(AL, Y):
    m = Y.shape[1]
    cost = (-1/m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1-Y, np.log(1-AL)))
    return cost

cost = compute_cost(AL, y_train)
```

6. Backward propagation: compute the gradients of the loss with respect to every weight and bias. For a sigmoid output with cross-entropy loss, the output-layer error simplifies to AL - Y.

```python
def backward_propagation(AL, Y, parameters, caches):
    grads = {}
    L = len(parameters) // 2
    m = Y.shape[1]

    # Output layer: sigmoid + cross-entropy gives dZ = AL - Y
    A_prev, _ = caches[L - 1]
    dZ = AL - Y
    grads['dW' + str(L)] = (1/m) * np.dot(dZ, A_prev.T)
    grads['db' + str(L)] = (1/m) * np.sum(dZ, axis=1, keepdims=True)

    # Hidden layers: propagate the error back through the ReLU activations
    for l in range(L - 1, 0, -1):
        A_prev, Z = caches[l - 1]
        dA = np.dot(parameters['W' + str(l + 1)].T, dZ)
        dZ = dA * (Z > 0)                    # ReLU derivative
        grads['dW' + str(l)] = (1/m) * np.dot(dZ, A_prev.T)
        grads['db' + str(l)] = (1/m) * np.sum(dZ, axis=1, keepdims=True)
    return grads

grads = backward_propagation(AL, y_train, parameters, caches)
```

7. Parameter update: adjust the parameters using the gradients and the learning rate.

```python
def update_parameters(parameters, grads, learning_rate):
    L = len(parameters) // 2
    for l in range(1, L + 1):
        parameters['W' + str(l)] -= learning_rate * grads['dW' + str(l)]
        parameters['b' + str(l)] -= learning_rate * grads['db' + str(l)]
    return parameters

parameters = update_parameters(parameters, grads, learning_rate=0.1)
```

8. Model training: integrate the steps above into a single function and iterate for a number of training steps.

```python
def model(X, Y, layer_dims, learning_rate, num_iterations):
    parameters = initialize_parameters(layer_dims)
    for i in range(num_iterations):
        AL, caches = forward_propagation(X, parameters)
        cost = compute_cost(AL, Y)           # can be printed periodically to monitor training
        grads = backward_propagation(AL, Y, parameters, caches)
        parameters = update_parameters(parameters, grads, learning_rate)
    return parameters

parameters = model(X_train, y_train, layer_dims, learning_rate=0.1, num_iterations=1000)
```

These are the main steps for implementing a multilayer perceptron (MLP) in plain Python. Depending on the dataset and the problem, further steps such as hyperparameter tuning and model evaluation may be needed. In practice, you can also use higher-performance libraries (such as TensorFlow or Keras) to implement an MLP.
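A quick sanity check of the from-scratch code on synthetic data (the data, shapes and hyperparameters here are made up for illustration; it reuses the functions defined above):

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(5, 200)                           # 5 features, 200 samples (column-major)
y = (X[0] + X[1] > 0).astype(float).reshape(1, -1)    # a simple separable rule

parameters = model(X, y, layer_dims=[5, 8, 1], learning_rate=0.1, num_iterations=2000)
AL, _ = forward_propagation(X, parameters)
print('train accuracy:', ((AL > 0.5) == y).mean())    # should be close to 1.0 for this easy rule
```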