MXNet Learning 7: Logistic Regression

Overview

A previous post covered Linear Regression, following the official tutorial's code; here I implement Logistic Regression by mirroring that linear regression implementation.

Main Text

import numpy as np
import mxnet as mx
import random

This is an MXNet implementation of the logistic regression code from the book Machine Learning in Action. I am not sure exactly how LogisticRegressionOutput is implemented internally.

Batch gradient descent and stochastic gradient descent are not distinguished here, and I am not sure whether random sampling is implemented; the learning rate also appears to be fixed.
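From what I can tell, LogisticRegressionOutput applies a sigmoid on the forward pass and uses the gradient (p - label), scaled by grad_scale, on the backward pass. A minimal NumPy sketch of that behavior (the function names here are my own, not MXNet's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_regression_output(score, label, grad_scale=1.0):
    # forward: squash the fc score into a probability
    p = sigmoid(score)
    # backward: gradient of the cross-entropy loss w.r.t. the score,
    # scaled by grad_scale (0.1 in the code below)
    grad = (p - label) * grad_scale
    return p, grad

p, g = logistic_regression_output(np.array([0.0]), np.array([1.0]), grad_scale=0.1)
```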

#Training data
train_data = []; train_label = []
#Evaluation Data
eval_data = []; eval_label = []

# each row has 21 features, with the label in the last (22nd) column
with open('horseColicTraining.txt') as fr:
    for line in fr:
        lineArr = line.strip().split()
        # wrap each feature in a one-element list -> shape (N, 21, 1)
        train_data.append([[float(lineArr[i])] for i in range(21)])
        train_label.append(float(lineArr[21]))

with open('horseColicTest.txt') as fr:
    for line in fr:
        lineArr = line.strip().split()
        eval_data.append([[float(lineArr[i])] for i in range(21)])
        eval_label.append(float(lineArr[21]))

The data above comes with the book Machine Learning in Action.
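Note that the parsing loop wraps each feature value in its own one-element list, so after conversion the arrays have shape (num_samples, 21, 1) rather than (num_samples, 21); FullyConnected flattens everything after the batch axis, so this still trains as 21 input features. A quick check on a made-up line:

```python
import numpy as np

# a made-up whitespace-separated line: 21 features followed by one label
line = " ".join(str(i) for i in range(22))
lineArr = line.strip().split()
sample = [[float(lineArr[i])] for i in range(21)]  # same wrapping as the loop above
label = float(lineArr[21])

data = np.array([sample])  # a batch of one sample
# data.shape is (1, 21, 1); FullyConnected sees 21 flattened features
```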

batch_size = 1

eval_data = np.array(eval_data)
eval_label = np.array(eval_label)
train_data = np.array(train_data)
train_label = np.array(train_label)

train_iter = mx.io.NDArrayIter(train_data, train_label, batch_size, shuffle=True, label_name='log_reg_label')
eval_iter = mx.io.NDArrayIter(eval_data, eval_label, batch_size, shuffle=False, label_name='log_reg_label')

X = mx.sym.Variable('data')
Y = mx.sym.Variable('log_reg_label')

fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
lro = mx.sym.LogisticRegressionOutput(data=fully_connected_layer, label=Y, grad_scale=0.1, name="lro")
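The network is just a single fully connected unit followed by a sigmoid, so the forward pass amounts to p = sigmoid(Wx + b). A plain NumPy sketch (the weights here are made up; fc1 learns the real ones during fit):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 21))  # made-up weights standing in for fc1
b = 0.0
x = rng.normal(size=(21,))    # one made-up 21-feature sample

score = W @ x + b                     # FullyConnected with num_hidden=1
prob = 1.0 / (1.0 + np.exp(-score))   # sigmoid applied by LogisticRegressionOutput
```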

model = mx.mod.Module(
    symbol=lro,
    data_names=['data'],
    label_names=['log_reg_label']  # network structure
)

# I am not sure how best to choose learning_rate and momentum

model.fit(train_iter, eval_iter,
          optimizer_params={'learning_rate': 0.01, 'momentum': 0.9},
          num_epoch=1000,
          batch_end_callback=mx.callback.Speedometer(batch_size, 2))
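For reference, the default optimizer here is plain SGD, and with momentum the per-step update (ignoring weight decay) keeps a velocity buffer: v = momentum * v - lr * grad, then w += v. A one-step sketch with made-up weights and gradient:

```python
import numpy as np

lr, momentum = 0.01, 0.9            # the values passed to optimizer_params
w = np.zeros(3)                     # made-up weights
v = np.zeros(3)                     # momentum (velocity) buffer, starts at zero
grad = np.array([1.0, -2.0, 0.5])   # made-up gradient for one batch

# one SGD-with-momentum step; first step reduces to w -= lr * grad
v = momentum * v - lr * grad
w = w + v
```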

#Inference
model.predict(eval_iter).asnumpy()

array([[ 1.00000000e+00],
[ 1.00000000e+00],
[ 1.07626455e-18],
[ 2.43250756e-17],
[ 3.17663907e-05],
[ 1.00000000e+00],
[ 3.23768938e-03],
[ 1.00000000e+00],
[ 1.00000000e+00],
[ 1.16790067e-09],
[ 9.49854195e-01],
[ 0.00000000e+00],
[ 1.75752318e-19],
[ 5.99031241e-12],
[ 1.00000000e+00],
[ 9.99998093e-01],
[ 9.99995470e-01],
[ 8.78181338e-24],
[ 0.00000000e+00],
[ 1.28962585e-12],
[ 9.14914488e-24],
[ 3.59429606e-08],
[ 3.31659359e-19],
[ 9.73756745e-34],
[ 2.55922098e-02],
[ 9.62432384e-01],
[ 9.99997497e-01],
[ 3.41130624e-10],
[ 1.00000000e+00],
[ 2.79457331e-01],
[ 1.00000000e+00],
[ 2.28396157e-32],
[ 6.65185758e-25],
[ 7.65202936e-27],
[ 3.74540492e-23],
[ 5.13624229e-07],
[ 1.13058608e-11],
[ 1.00000000e+00],
[ 9.99996662e-01],
[ 9.99999642e-01],
[ 1.00000000e+00],
[ 1.00000000e+00],
[ 9.30115818e-26],
[ 3.03375537e-17],
[ 5.16874559e-18],
[ 1.02254209e-25],
[ 9.65888936e-13],
[ 2.74748024e-09],
[ 9.99820411e-01],
[ 2.89250483e-29],
[ 9.69527386e-27],
[ 1.13635138e-02],
[ 2.27366835e-02],
[ 7.95625806e-01],
[ 6.94157720e-01],
[ 1.00000000e+00],
[ 1.00000000e+00],
[ 9.78604619e-20],
[ 3.82486790e-19],
[ 3.38439264e-28],
[ 1.00000000e+00],
[ 9.69527386e-27],
[ 5.10404334e-06],
[ 1.00000000e+00],
[ 3.51304506e-11],
[ 5.41616619e-01],
[ 1.02707554e-09]], dtype=float32)
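Note that predict returns sigmoid probabilities, not class labels; to get the 0/1 decisions that the book's classifyVector produces, threshold at 0.5 and compare against the labels. A sketch on made-up values:

```python
import numpy as np

probs = np.array([0.97, 0.03, 0.51, 0.49])   # made-up sigmoid outputs
labels = np.array([1.0, 0.0, 0.0, 1.0])      # made-up ground-truth labels

preds = (probs > 0.5).astype(np.float64)     # threshold at 0.5
error_rate = float(np.mean(preds != labels)) # fraction misclassified
```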

#Evaluation
metric = mx.metric.MSE()
model.score(eval_iter, metric)
print(metric.get())

('mse', 0.31062849529015285)
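mx.metric.MSE simply averages the squared differences between the sigmoid probabilities and the 0/1 labels, which is a rather loose measure for classification. It is easy to reproduce by hand (values made up):

```python
import numpy as np

probs = np.array([0.9, 0.1, 0.6])    # made-up predicted probabilities
labels = np.array([1.0, 0.0, 1.0])   # made-up 0/1 labels

mse = float(np.mean((probs - labels) ** 2))  # what mx.metric.MSE computes
```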

The logistic regression implementation is very similar to the linear regression one, so I won't go into further detail.
