Tensorflow实现逻辑回归

1. 代码实现:

import numpy as np;
import matplotlib.pyplot as plt;
import tensorflow as tf;
DATA_FILE1 = "./data1.txt";
class LogicRegression:
    def sigmoid(self,z):
        return 1/(1+np.exp(-z));
    def __init__(self):
        data = np.loadtxt(DATA_FILE1, delimiter=",");
        self.x = data[:,0:2].astype(np.float32);
        self.y = data[:,2].astype(np.float32);
        self.pos = np.where(self.y==1);
        self.neg = np.where(self.y==0);
        # rememeber reshape to 1 dimension
        self.x = np.reshape(self.x,newshape=(len(self.y),2));
        self.y = np.reshape(self.y,newshape=(len(self.y),1));
        
    def train(self):
        x = tf.placeholder(tf.float32);
        y = tf.placeholder(tf.float32);
        #w = tf.Variable(np.random.randn(2,1).astype(np.float32));
        #here we need to understand why initialize with zeros
        w = tf.Variable(tf.zeros([2,1]));
        b = tf.Variable(np.random.rand());

        z = tf.matmul(x,w) + b;
        h = (y * tf.log(tf.sigmoid(z)) + (1-y)* tf.log(1-tf.sigmoid(z))) * -1;
        loss = tf.reduce_mean(h);

        optimizer = tf.train.GradientDescentOptimizer(0.002).minimize(loss);
        init = tf.global_variables_initializer();
        
        with tf.Session() as sess:
            sess.run(init);
            for i in range(1000000):
                
                feed = {x:self.x,y:self.y};
                sess.run(optimizer,feed_dict = feed);
                #'''
                if (i % 50000 == 0):
                    #w = sess.run(w);
                    print (sess.run(w).flatten(),sess.run(b).flatten());
                    print ("loss:",sess.run(loss,{x:self.x,y:self.y}));
            self.w = sess.run(w).flatten();
            self.b = sess.run(b).flatten();
            #'''
    def show(self):
        plt.figure();
        plt.scatter(self.x[self.pos,0],self.x[self.pos,1],c='g',marker='o');
        plt.scatter(self.x[self.neg,0],self.x[self.neg,1],c='r',marker='x')
        x = np.linspace(0, 100, 1000)
        y = []
        for i in x:
            y.append((i * -self.w[0] - self.b) / self.w[1])
            
        plt.plot(x, y)
        plt.show();
if __name__ == "__main__":
    LR = LogicRegression();
    LR.train();
    LR.show();

2. 模型介绍:


要使用Tensorflow实现逻辑回归,可以按照以下步骤进行操作。首先,导入所需的包和库,包括Tensorflow和NumPy等。然后,准备数据集,包括输入特征和目标变量。接下来,定义模型的参数,如权重和偏差,并设置模型的超参数,如学习率和迭代次数。然后,定义模型的结构,包括输入占位符、权重和偏差的变量和模型的输出。使用逻辑回归的损失函数和优化算法来训练模型,并在训练过程中对模型进行评估。最后,使用训练好的模型来进行预测。具体的代码和实现细节可以参考引用的资料和。<span class="em">1</span><span class="em">2</span><span class="em">3</span> #### 引用[.reference_title] - *1* *3* [【TensorflowTensorflow实现线性回归及逻辑回归](https://blog.csdn.net/Daycym/article/details/89979772)[target="_blank" data-report-click={"spm":"1018.2226.3001.9630","extra":{"utm_source":"vip_chatgpt_common_search_pc_result","utm_medium":"distribute.pc_search_result.none-task-cask-2~all~insert_cask~default-1-null.142^v92^chatsearchT0_1"}}] [.reference_item style="max-width: 50%"] - *2* [tensorflow实现逻辑回归模型](https://download.csdn.net/download/weixin_38707862/12866985)[target="_blank" data-report-click={"spm":"1018.2226.3001.9630","extra":{"utm_source":"vip_chatgpt_common_search_pc_result","utm_medium":"distribute.pc_search_result.none-task-cask-2~all~insert_cask~default-1-null.142^v92^chatsearchT0_1"}}] [.reference_item style="max-width: 50%"] [ .reference_list ]
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值