Softmax Regression in Python

1. Introduction

A previous post explained the ideas behind Softmax regression and Logistic regression (blog link). This article focuses on implementing them in a Python script and on implementing and comparing several gradient-based optimizers; for a walkthrough of the common optimizers, please refer to my other blog post.

2. 编码实现

2.1 数据生成

import numpy as np
import matplotlib.pyplot as plt


# Generate two Gaussian clusters of 2-D points and plot them.
# Returns a (2*num_data, 3) design matrix with columns [x1, x2, bias]
# and a (2*num_data, 1) label vector: 0 for class 1, 1 for class 2.
def get_data(num_data=100):
    x = np.reshape(np.random.normal(1, 1, num_data), (num_data, 1))
    y = np.reshape(np.random.normal(0, 1, num_data), (num_data, 1))
    bias = np.ones((num_data, 1))
    class1 = np.concatenate((x, y, bias), axis=1)
    x = np.reshape(np.random.normal(5, 1, num_data), (num_data, 1))
    y = np.reshape(np.random.normal(6, 1, num_data), (num_data, 1))
    class2 = np.concatenate((x, y, bias), axis=1)

    plt.plot(class1[:, 0], class1[:, 1], 'rs', class2[:, 0], class2[:, 1], 'go')
    plt.grid(True)
    plt.title('Distribution')
    plt.xlabel('X1')
    plt.ylabel('X2')
    plt.show()

    label_data = np.zeros((2*num_data, 1))
    label_data[num_data:2*num_data] = 1.0

    return np.concatenate((class1[:, :], class2[:, :]), axis=0), label_data
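
As a quick sanity check, calling get_data with a small num_data makes the returned shapes explicit (this also pops up the scatter plot); the expected values are shown in the comments:

# Shape check for get_data (uses the imports above)
data, labels = get_data(num_data=5)
print(data.shape)        # (10, 3): columns are [x1, x2, bias]
print(labels.ravel())    # [0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]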

[Figure: scatter plot of the two generated classes ('Distribution', X1 vs. X2)]

2.2 Solving with Gradient Ascent

# Batch gradient ascent on the log-likelihood (equivalently,
# gradient descent on the negative log-likelihood)
def grad_ascent(train_data, label_data, num_iter, alpha=0.001):
    weights = np.ones((train_data.shape[1], 1))
    train_data = np.mat(train_data)
    label_data = np.mat(label_data)
    weights_x1 = []
    weights_x2 = []
    weights_bias = []
    for i in range(num_iter):
        temp = sigmoid(train_data*weights)                 # predicted probabilities
        error = label_data - temp                          # residuals
        weights = weights + alpha * train_data.T * error   # full-batch update
        weights_x1.append(weights[0])
        weights_x2.append(weights[1])
        weights_bias.append(weights[2])

    weights = np.array(weights)
    weights = weights[:, 0]  # flatten the (3, 1) matrix to a length-3 array

    # Plot how each weight evolves over the iterations
    x = np.arange(num_iter)
    weights_x1 = np.array(weights_x1)[:, 0, 0]
    weights_x2 = np.array(weights_x2)[:, 0, 0]
    weights_bias = np.array(weights_bias)[:, 0, 0]
    plt.subplot(311)
    plt.plot(x, weights_x1, 'b-')
    plt.title('weight_x1')
    plt.grid(True)
    plt.subplot(312)
    plt.plot(x, weights_x2, 'b-')
    plt.title('weight_x2')
    plt.grid(True)
    plt.subplot(313)
    plt.plot(x, weights_bias, 'b-')
    plt.title('weight_bias')
    plt.grid(True)
    plt.show()

    return weights
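
grad_ascent calls a sigmoid helper that this excerpt does not define; a minimal version consistent with the matrix arithmetic above is:

# Logistic (sigmoid) function; np.exp is element-wise, so this
# also works on the np.mat product train_data * weights
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

The update weights + alpha * train_data.T * error is the gradient-ascent step on the log-likelihood, whose gradient is X^T(y - sigmoid(Xw)); maximizing the log-likelihood this way is equivalent to gradient descent on the negative log-likelihood.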

2.3 Other Components

# Plot the decision boundary: on the boundary w0*x1 + w1*x2 + w2 = 0,
# so x2 = -(w2 + w0*x1) / w1
def plot_decision(train_data, data_num, weights):
    x = np.linspace(-2, 9, train_data.shape[0])
    y = np.array((-weights[2]-weights[0]*x)/weights[1])

    plt.plot(train_data[0:data_num, 0], train_data[0:data_num, 1], 'bs',
             train_data[data_num:2*data_num, 0], train_data[data_num:2*data_num, 1], 'go',
             x, y, 'r-')
    plt.grid(True)
    plt.title('line')
    plt.xlabel('X1')
    plt.ylabel('X2')
    plt.show()

if __name__ == '__main__':
    data_num = 100
    train_data, train_label = get_data(data_num)

    # Batch gradient ascent
    weights = grad_ascent(train_data, train_label, 1000)

    # Stochastic gradient ascent
    # weights = stoc_grad_ascent(train_data, train_label, 400)

    # Improved stochastic gradient ascent
    # weights = stoc_grad_ascent1(train_data, train_label, 800)

    # Plot the decision boundary
    plot_decision(train_data, data_num, weights)

For the rest of the complete code, please visit my Git repository.
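
For readers who want a self-contained script, here is a rough sketch of the improved stochastic variant referenced (but commented out) in the main block. It is a hypothetical reconstruction with a decaying step size and random sample order, not necessarily the version in the repository:

# Hypothetical sketch of stoc_grad_ascent1; the repository version may differ
def stoc_grad_ascent1(train_data, label_data, num_iter):
    m, n = train_data.shape
    weights = np.ones(n)
    for j in range(num_iter):
        indices = list(range(m))
        for i in range(m):
            # Step size decays over time but never reaches zero
            alpha = 4.0 / (1.0 + j + i) + 0.01
            # Draw a random remaining sample so each pass visits every sample once
            idx = indices.pop(int(np.random.uniform(0, len(indices))))
            h = sigmoid(np.dot(train_data[idx], weights))  # scalar prediction
            error = label_data[idx, 0] - h
            weights = weights + alpha * error * train_data[idx]
    return weights

Its return value is a length-3 array, so it can be passed to plot_decision the same way as the batch version's output.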

3. Results

[Figure: weight trajectories and the fitted decision boundary separating the two classes]
