2--Multilayer Perceptrons

2.1 Perceptrons

        A perceptron consists of two layers of neurons: an input layer and an output layer. In practice this model is quite limited; for example, it cannot compute XOR, because a single linear unit can only separate linearly separable data. To handle problems that are not linearly separable, we add one or more hidden layers to the network, yielding a multilayer perceptron (MLP); a minimal sketch of this idea follows.
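The sketch below is not from the original post; the hidden width, learning rate, and step count are arbitrary choices. It illustrates that a one-hidden-layer network with a nonlinearity can usually fit XOR, which no single linear layer can:

import torch
from torch import nn

# XOR is not linearly separable, so a lone linear layer cannot fit it.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# One hidden layer plus a nonlinearity is enough to represent XOR.
mlp = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
optimizer = torch.optim.SGD(mlp.parameters(), lr=0.5)
loss_fn = nn.MSELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss_fn(mlp(X), y).backward()
    optimizer.step()

print(mlp(X).detach().round())  # usually recovers [[0.], [1.], [1.], [0.]]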

        An MLP adds one or more fully connected hidden layers between the input and output layers, and transforms each hidden layer's output with an activation function. (Figure omitted: an MLP with a single hidden layer.)

        An activation function determines whether a neuron should be activated by computing a weighted sum and adding a bias; it is a differentiable operation that converts input signals into outputs. Most activation functions are nonlinear. Three common ones are:

ReLU function:

 \operatorname{ReLU}(x) = \max(x, 0).

Sigmoid function:

\operatorname{sigmoid}(x) = \frac{1}{1 + \exp(-x)}.

Tanh function:

\operatorname{tanh}(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}.
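To make these definitions concrete, here is a small sketch (not in the original post; the sample points are arbitrary) that evaluates each activation in PyTorch and checks the built-in tanh against the exp-based formula above:

import torch

x = torch.linspace(-4, 4, 9)

print(torch.relu(x))     # max(x, 0): zero for negative inputs
print(torch.sigmoid(x))  # squashes values into (0, 1)
print(torch.tanh(x))     # squashes values into (-1, 1)

# The exp form above should match PyTorch's built-in tanh.
tanh_manual = (1 - torch.exp(-2 * x)) / (1 + torch.exp(-2 * x))
print(torch.allclose(torch.tanh(x), tanh_manual, atol=1e-6))  # True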

2.2 Code implementation 1 (from scratch)

!pip install git+https://github.com/d2l-ai/d2l-zh@release  # installing d2l
!pip install matplotlib==3.0.0

import torch
from torch import nn
from d2l import torch as d2l

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
# Initialize model parameters
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = nn.Parameter(torch.randn(num_inputs, num_hiddens, requires_grad=True)*0.01)
b1 = nn.Parameter(torch.zeros(num_hiddens, requires_grad=True))
W2 = nn.Parameter(torch.randn(num_hiddens, num_outputs, requires_grad=True)*0.01)
b2 = nn.Parameter(torch.zeros(num_outputs, requires_grad=True))

params = [W1,b1,W2,b2]

def relu(X):
  a = torch.zeros_like(X)
  return torch.max(X,a)

def net(X):
  X = X.reshape((-1,num_inputs))
  H = relu(X@W1 + b1)  # @ is matrix multiplication
  return H@W2 + b2

loss = nn.CrossEntropyLoss(reduction='none')  # per-example losses; d2l.train_ch3 averages them before backprop

# The MLP's training loop is identical to softmax regression's, so we reuse the softmax training function
num_epochs, lr = 10, 0.1
updater = torch.optim.SGD(params, lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
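As a post-training check, the sketch below (not in the original post; the helper accuracy is made up for illustration) computes test accuracy manually instead of reading it off d2l's training plot:

# Hypothetical helper: fraction of test images classified correctly.
def accuracy(net, data_iter):
    correct, total = 0, 0
    with torch.no_grad():
        for X, y in data_iter:
            correct += (net(X).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

print(accuracy(net, test_iter))  # typically roughly 0.8-0.85 after 10 epochs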

Output: (training loss and accuracy curves omitted)

2.3 Code implementation 2 (concise, using high-level APIs)

!pip install git+https://github.com/d2l-ai/d2l-zh@release  # installing d2l
!pip install matplotlib==3.0.0

import torch
from torch import nn
from d2l import torch as d2l

# Flatten each 1x28x28 image to a 784-vector, then 784 -> 256 -> 10
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def init_weights(m):
    # Draw each linear layer's weights from N(0, 0.01^2)
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

net.apply(init_weights);

batch_size, lr, num_epochs = 256, 0.1, 10
loss = nn.CrossEntropyLoss(reduction='none')
trainer = torch.optim.SGD(net.parameters(), lr=lr)

train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
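After training, the book's predict_ch3 helper from the softmax chapter can visualize a few test predictions; this call is not in the original post:

d2l.predict_ch3(net, test_iter)  # plot sample test images with true/predicted labels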

Output: (training loss and accuracy curves omitted)

