Andrew Ng, Course 1 - Neural Networks and Deep Learning - Week 2 Assignment - Logistic Regression with a Neural Network Mindset

import numpy as np
import matplotlib.pyplot as plt
import h5py

Dataset loading function: load the dataset; return train_x, train_y, test_x, test_y.

def load_dataset():
    train_set = h5py.File("/home/yan/下载/assignment/datasets/train_catvnoncat.h5", 'r')
    train_x = np.array(train_set['train_set_x'][:])
    train_y = np.array(train_set['train_set_y'][:])

    test_set = h5py.File("/home/yan/下载/assignment/datasets/test_catvnoncat.h5", 'r')
    test_x = np.array(test_set['test_set_x'][:])
    test_y = np.array(test_set['test_set_y'][:])
    
    return train_x, train_y, test_x, test_y

Initialization function: flatten the features; scale the features; return train_x, train_y, test_x, test_y.

def init():
    train_x, train_y, test_x, test_y = load_dataset()
    
    # flatten each image into a column vector and scale pixels to [0, 1]
    train_x = train_x.reshape(train_x.shape[0], -1).T / 255
    train_y = train_y.reshape(1, -1)
    test_x = test_x.reshape(test_x.shape[0], -1).T / 255
    test_y = test_y.reshape(1, -1)
    
    return train_x, train_y, test_x, test_y
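The flattening step above is easy to get wrong, so here is a minimal sanity check on a dummy batch (the shapes `(m, h, w, c)` are assumptions standing in for the real cat/non-cat images): `reshape(m, -1).T` turns each image into one column of the design matrix.

```python
import numpy as np

# Dummy "image" batch: m examples of shape (h, w, c)
m, h, w, c = 5, 4, 4, 3
images = np.arange(m * h * w * c, dtype=float).reshape(m, h, w, c)

# Same flattening as in init(): one column per example, scaled to [0, 1]
flat = images.reshape(images.shape[0], -1).T / 255
print(flat.shape)  # (48, 5)
```

Column 0 of `flat` is exactly the first image unrolled, which is the invariant the model relies on.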

Sigmoid function: maps any real input into (0, 1).

def sigmoid(z):
    return 1 / (1 + np.exp(-z))
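A quick sanity check of the sigmoid's behavior: it is exactly 0.5 at zero and saturates toward 0 and 1 for large negative and positive inputs, which is why thresholding the activation at 0.5 later gives a sensible classifier.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# sigmoid(0) = 0.5; large |z| saturates toward 0 or 1
vals = sigmoid(np.array([-10.0, 0.0, 10.0]))
print(vals)
```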

Prediction function: activations above 0.5 are treated as positive, otherwise negative; compare against the labels and return the accuracy (%).

def predict(a, y):
    p = (a > 0.5).astype(int)              # threshold activations at 0.5
    return (1 - np.mean(np.abs(p - y))) * 100
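A small worked example of the accuracy formula, on made-up activations and labels: the mean of `|p - y|` is the error rate, so one minus it (times 100) is the accuracy in percent.

```python
import numpy as np

def predict(a, y):
    p = (a > 0.5).astype(int)              # threshold activations at 0.5
    return (1 - np.mean(np.abs(p - y))) * 100

a = np.array([[0.9, 0.2, 0.7, 0.4]])       # predicted probabilities
y = np.array([[1,   0,   0,   0  ]])       # true labels
print(predict(a, y))  # 3 of 4 correct -> 75.0
```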

Gradient descent: forward and backward propagation to obtain the gradients; update the parameters; record the cost every 50 iterations; compute accuracy when training ends; return the model.

def gradient_decent(x, y, learning_rate, iteration_times, test_x, test_y):
    costs = []
    # small random initialization of the weights and bias
    w = np.random.randn(x.shape[0], 1) * 0.001
    b = np.random.randn(1, 1).squeeze() * 0.001
    
    for i in range(iteration_times):
        a = sigmoid(w.T @ x + b)               # forward propagation
        dw = x @ (a - y).T / y.shape[1]        # backward propagation: gradients
        db = np.sum(a - y) / y.shape[1]
        if i % 50 == 0:                        # record the cross-entropy cost every 50 iterations
            cost = -(y @ np.log(a).T + (1 - y) @ np.log(1 - a).T).squeeze() / y.shape[1]
            print("Iteration times: ", i, "  cost: ", cost)
            costs.append(cost)
        w = w - learning_rate * dw             # parameter update
        b = b - learning_rate * db
        
    model = {"w": w,
             "b": b,
             "costs": costs,
             "train_accuracy": predict(sigmoid(w.T @ x + b), y),
             "test_accuracy": predict(sigmoid(w.T @ test_x + b), test_y)
    }
    
    return model
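One way to gain confidence in the analytic gradient `dw = x @ (a - y).T / m` is a numerical gradient check on a tiny random problem. This is a minimal sketch (the sizes and seed are arbitrary assumptions, not part of the assignment): each component of `dw` should agree with a centered finite difference of the cost to high precision.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                  # 4 features, 8 examples
y = (rng.random((1, 8)) > 0.5).astype(float)
w = rng.standard_normal((4, 1)) * 0.01
b = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cost(w, b):
    # cross-entropy cost, same formula as in training
    a = sigmoid(w.T @ x + b)
    return (-(y @ np.log(a).T + (1 - y) @ np.log(1 - a).T) / y.shape[1]).item()

a = sigmoid(w.T @ x + b)
dw = x @ (a - y).T / y.shape[1]                  # analytic gradient

# centered finite difference, one weight at a time
eps = 1e-6
dw_num = np.zeros_like(w)
for i in range(w.shape[0]):
    wp, wm = w.copy(), w.copy()
    wp[i, 0] += eps
    wm[i, 0] -= eps
    dw_num[i, 0] = (cost(wp, b) - cost(wm, b)) / (2 * eps)

print(np.max(np.abs(dw - dw_num)))               # should be tiny
```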

Main program

train_x, train_y, test_x, test_y = init()
learning_rates = [0.001, 0.005, 0.01, 0.03, 0.05]
colors = ['r', 'g', 'b', 'y', 'c']
iteration_times = 5000

fig = plt.figure()
ax = fig.add_subplot(111)
ax.axis([0, 5001, 0, 1])
ax.set_xticks(np.arange(0, 5001, 500))
ax.set_yticks(np.arange(0, 1.1, 0.1))
ax.set_xlabel("x = Iteration times")
ax.set_ylabel("y = Cost")

models = {}

for i in range(5):
    models[str(learning_rates[i])] = gradient_decent(train_x, train_y, learning_rates[i], iteration_times, test_x, test_y) 
    models[str(learning_rates[i])]['color'] = colors[i]
    
for a in learning_rates:
    ax.plot(np.arange(0, 5000, 50), models[str(a)]["costs"], label = "Learning rate = " + str(a),
            color = models[str(a)]["color"])
    print("Learning rate: " + str(a) + "\nIteration times: 5000\n" +
          "Training set's accuracy: " + str(round(models[str(a)]['train_accuracy'], 2)) + "%" +
          "\nTest set's accuracy: " + str(round(models[str(a)]['test_accuracy'], 2)) + "%" +
          "\n*-----------------------------------------------*")
    
ax.legend()
plt.show()

Partial training log:

Iteration times:  0   cost:  0.7077431474573537
Iteration times:  50   cost:  0.6171643600717469
Iteration times:  100   cost:  0.5920356684252118
Iteration times:  150   cost:  0.5725404511562315
Iteration times:  200   cost:  0.5562126286415825
Iteration times:  250   cost:  0.5419715055176667
Iteration times:  300   cost:  0.5292445257992887
Iteration times:  350   cost:  0.5176834877418875
Iteration times:  400   cost:  0.5070565223952678
Iteration times:  450   cost:  0.4971997080406956
Iteration times:  500   cost:  0.48799217382191296
Iteration times:  550   cost:  0.4793418871987588
Iteration times:  600   cost:  0.4711769278092411
Iteration times:  650   cost:  0.46343984899835206
Iteration times:  700   cost:  0.45608389447580183
Iteration times:  750   cost:  0.44907038307031616
Iteration times:  800   cost:  0.44236685533177955
Iteration times:  850   cost:  0.4359457311741904
Iteration times:  900   cost:  0.4297833185355472
Iteration times:  950   cost:  0.42385906820412445
Iteration times:  1000   cost:  0.4181550045437344
Iteration times:  1050   cost:  0.41265528408144136
Iteration times:  1100   cost:  0.4073458485235077

Final results:

Learning rate: 0.001
Iteration times: 5000
Training set's accuracy: 82.02%
Test set's accuracy: 82.02%
*-----------------------------------------------*
Learning rate: 0.005
Iteration times: 5000
Training set's accuracy: 94.15%
Test set's accuracy: 94.15%
*-----------------------------------------------*
Learning rate: 0.01
Iteration times: 5000
Training set's accuracy: 97.02%
Test set's accuracy: 97.02%
*-----------------------------------------------*
Learning rate: 0.03
Iteration times: 5000
Training set's accuracy: 99.38%
Test set's accuracy: 99.38%
*-----------------------------------------------*
Learning rate: 0.05
Iteration times: 5000
Training set's accuracy: 99.74%
Test set's accuracy: 99.74%
*-----------------------------------------------*

As the results show, a learning rate of roughly 0.008 with around 8,000 iterations trains a model with fairly high prediction accuracy. Note, however, that the training and test accuracies are identical for every learning rate, which suggests these numbers were produced while the "test" set was accidentally loaded from the training file, so the test accuracies above should not be read as generalization performance.
