Deep Learning: Cat vs. Non-Cat Classification with a Two-Hidden-Layer Network

  • Load the dataset and import the required packages


  • import numpy as np
    import torch
    from torch import nn
    import h5py
    from torch.nn import init
    
    
    def load_data():
        # Raw strings (r'...') so the backslashes in the Windows paths are not treated as escapes
        train_dataset = h5py.File(r'E:\文档\【吴恩达课后编程作业】第二周作业 - Logistic回归-识别猫的图片资源/train_catvnoncat.h5', "r")
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels
    
        test_dataset = h5py.File(r'E:\文档\【吴恩达课后编程作业】第二周作业 - Logistic回归-识别猫的图片资源/test_catvnoncat.h5', "r")
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels
    
        classes = np.array(test_dataset["list_classes"][:])  # the list of classes
    
        # Reshape the labels into row vectors of shape (1, m)
        train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
        test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    
        return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
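
    For reference, here is a quick sanity-check sketch of what load_data returns; the shapes in the comments assume the standard train_catvnoncat.h5 / test_catvnoncat.h5 files from the course (throwaway names, just for the check):

    tx, ty, sx, sy, cls = load_data()
    print(tx.shape)  # (209, 64, 64, 3): 209 training images, 64x64 RGB
    print(ty.shape)  # (1, 209): one 0/1 label per training image
    print(sx.shape)  # (50, 64, 64, 3): 50 test images
    print(cls)       # [b'non-cat' b'cat']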
  • Data preprocessing: flatten the three RGB channels into one feature vector per image and normalize


  • train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes = load_data()
    # Flatten each (64, 64, 3) image into a single row of 64*64*3 = 12288 values
    train_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1)
    train_set_y = train_set_y_orig.reshape(train_set_y_orig.shape[0], -1).T  # (m, 1) column of labels
    print(train_flatten.shape)
    print(train_set_y.shape)
    test_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1)
    print(test_flatten.shape)
    # Scale pixel values from [0, 255] down to [0, 1]
    train_x = train_flatten / 255
    test_x = test_flatten / 255
    X = torch.from_numpy(train_x).float()
    Y = torch.from_numpy(train_set_y).float()
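
    As a quick check of the preprocessing (a minimal sketch): each flattened row should hold 64 * 64 * 3 = 12288 features, matching the input_size used for the network below, and all values should lie in [0, 1] after dividing by 255:

    assert train_x.shape[1] == 64 * 64 * 3                 # 12288 features per image
    assert train_x.min() >= 0.0 and train_x.max() <= 1.0   # normalized pixel range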

  • Define the network


  • class SingleNet(nn.Module):
        def __init__(self, input_size, hidden_size, hid1, output_size):
            super(SingleNet, self).__init__()
            self.hidden1 = nn.Linear(input_size, hidden_size)  # first hidden layer
            self.tanh = nn.Tanh()
            self.hidden2 = nn.Linear(hidden_size, hid1)  # second hidden layer
            self.ReLU = nn.ReLU()
            self.output = nn.Linear(hid1, output_size)  # output layer
            self.sigmoid = nn.Sigmoid()  # squashes the output into (0, 1)
    
        def forward(self, x):
            x = self.hidden1(x)
            x = self.tanh(x)
            x = self.hidden2(x)
            x = self.ReLU(x)
            x = self.output(x)
            x = self.sigmoid(x)
            return x
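
    To see the network's input/output contract, here is a small sketch with a dummy batch (the layer sizes match the hyperparameters chosen below):

    demo_net = SingleNet(12288, 500, 50, 1)
    probs = demo_net(torch.zeros(4, 12288))  # dummy batch of 4 flattened "images"
    print(probs.shape)                       # torch.Size([4, 1]); the sigmoid keeps each value in (0, 1)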

  • Define the loss function and the optimizer

  • input_size, hidden_size, hid1, output_size = 12288, 500, 50, 1
    net = SingleNet(input_size, hidden_size, hid1, output_size)
    """Optional explicit initialization (left commented out; PyTorch's defaults also work):
    init.normal_(net.hidden1.weight, mean=0, std=0.01)
    init.normal_(net.hidden2.weight, mean=0, std=0.01)
    init.normal_(net.output.weight, mean=0, std=0.01)
    init.constant_(net.hidden1.bias, val=0)
    init.constant_(net.hidden2.bias, val=0)
    init.constant_(net.output.bias, val=0)"""
    # Loss function: binary cross-entropy
    cost = nn.BCELoss()
    # Optimizer: stochastic gradient descent with momentum
    optimizer = torch.optim.SGD(net.parameters(), lr=0.0007, momentum=0.9)
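
    As an aside, a common alternative (not used here) is to drop the final nn.Sigmoid from the model and train with nn.BCEWithLogitsLoss, which fuses the sigmoid into the loss for better numerical stability; a sketch:

    # Sketch only: forward() would then end with `return self.output(x)` (raw logits)
    cost_logits = nn.BCEWithLogitsLoss()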

  • Training

    def evaluate_accuracy(x, y, net):
        # Fraction of samples where the thresholded prediction matches the label
        out = net(x)
        correct = (out.ge(0.5).float() == y).sum().item()
        n = y.shape[0]
        return correct / n
    
    def train(net, train_x, train_y, cost):
        num_epochs = 1000
        for epoch in range(num_epochs):
            out = net(train_x)      # full-batch forward pass
            l = cost(out, train_y)  # binary cross-entropy loss
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            train_loss = l.item()
    
            if (epoch + 1) % 100 == 0:
                train_acc = evaluate_accuracy(train_x, train_y, net)
                print('epoch %d, loss %.4f, train acc %.2f%%'
                      % (epoch + 1, train_loss, train_acc * 100))
    
    train(net, X, Y, cost)
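
    To check generalization as well, here is a minimal sketch that reuses evaluate_accuracy on the held-out test data (test_x and test_set_y_orig come from the preprocessing step above):

    X_test = torch.from_numpy(test_x).float()
    Y_test = torch.from_numpy(test_set_y_orig.T).float()  # (50, 1) column of test labels
    with torch.no_grad():                                  # no gradients needed for evaluation
        print('test acc {:.2f}%'.format(evaluate_accuracy(X_test, Y_test, net) * 100))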

    Final results:

  •  A final note: the two .h5 files are not included here; you will need to download them yourself, and they are readily available online.

  • 0
    点赞
  • 0
    收藏
    觉得还不错? 一键收藏
  • 0
    评论
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值