Perceptron Algorithm Implementation

Formulas

$y_{pred} = \mathrm{sign}(w^T x + b)$

$loss = \sum_{t_i \ne \mathrm{sign}(w^T x_i + b)} -t_i (w^T x_i + b)$

(where $t_i$ is the true label and $w^T x_i + b$ is the predicted score; the loss is the negative of the sum, over the misclassified points, of each point's predicted score multiplied by its true label)
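The update rules below can be derived by treating the set of misclassified points as fixed and taking the gradient of this loss:

$\frac{\partial loss}{\partial w} = -\sum_{t_i \ne \mathrm{sign}(w^T x_i + b)} t_i x_i, \qquad \frac{\partial loss}{\partial b} = -\sum_{t_i \ne \mathrm{sign}(w^T x_i + b)} t_i$

Moving against this gradient with step size $\alpha$ gives the update rules.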

Parameter update: $w^{new} = w^{old} + \alpha \sum_{t_i \ne \mathrm{sign}(w^T x_i + b)} t_i x_i$

$b^{new} = b^{old} + \sum_{t_i \ne \mathrm{sign}(w^T x_i + b)} \alpha t_i$

Data

x_i: (1, 2)  (2, 3)  (3, 3)  (2, 1)  (3, 2)
y_i:    1       1       1      -1      -1
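As a quick worked step, assuming the same initialization the loop-based code below uses ($w = (1, 1)$, $b = 0$, $\alpha = 0.01$): the first three points are already classified correctly by this initial $w$, so the first update is triggered by $x = (2, 1)$ with label $t = -1$, where $w^T x + b = 3$ and $\mathrm{sign}(3) = +1 \ne -1$. The update rules then give

$w^{new} = (1, 1) + 0.01 \cdot (-1) \cdot (2, 1) = (0.98, 0.99), \qquad b^{new} = 0 + 0.01 \cdot (-1) = -0.01$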

Code Implementation

1. Using a loop

import numpy as np

input_feature = 2
lr = 0.01
data = [[1, 2],
        [2, 3],
        [3, 3],
        [2, 1],
        [3, 2]]
labels = [1, 1, 1, -1, -1]


class Model():
    def __init__(self, input_feature):
        self.w = np.ones(input_feature)  # weights initialized to ones
        self.b = 0.0                     # bias initialized to zero

    def forward_backward(self, data, labels, lr):
        loss = 0
        for x, label in zip(data, labels):
            x = np.array(x)
            a = np.dot(self.w, x) + self.b  # raw score w^T x + b
            output = np.sign(a)             # predicted label
            if output != label:             # update only on misclassified points
                self.w = self.w + lr * label * x
                self.b = self.b + lr * label
                loss += -label * a          # accumulate -t_i (w^T x_i + b)
        return loss

    def predict(self, x):
        a = np.dot(self.w, x) + self.b      # fixed: use self.b, not the global b
        result = np.sign(a)
        return result
    

net = Model(input_feature)


for epoch in range(100):
    loss = net.forward_backward(data, labels, lr)
    print(loss)
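After training, a quick check (a minimal sketch reusing the names defined above) is to run predict on the training points; once the printed loss reaches 0, each point should map back to its own label:

# Verify the trained model on the training data
for x, label in zip(data, labels):
    print(x, net.predict(np.array(x)), label)
print(net.w, net.b)  # learned parameters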

2. Vectorized approach

import numpy as np

input_feature = 2


class Model():
    def __init__(self, input_feature):
        self.w = np.array([1.0, -1.0])  # weights (one per feature)
        self.b = 0.0                    # bias

    def forward_backward(self, x, labels, lr):
        # x has shape (features, samples): scores for all samples at once
        a = np.dot(self.w, x) + self.b              # fixed: include the bias term
        output = np.sign(a)
        flag = (output != labels)                   # mask of misclassified samples
        loss = -np.dot(flag * labels, a)            # fixed: use raw scores, not their signs
        self.w = self.w + lr * np.dot(x * flag, labels)
        self.b = self.b + lr * np.sum(flag * labels)  # fixed: sum so b stays a scalar
        return loss

    def predict(self, x):
        return np.sign(np.dot(self.w, x) + self.b)

net = Model(input_feature)
# Each column of `data` is one sample (shape: features x samples)
data = np.array([
    [1, 2, 3, 2, 3],
    [2, 3, 3, 1, 2]
])
labels = np.array([1, 1, 1, -1, -1])
lr = 0.5
for epoch in range(100):
    loss = net.forward_backward(data, labels, lr)
    print(loss)
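As with the loop version, a quick sanity check (a minimal sketch, reusing the objects defined above) is to compare the vectorized predictions against the labels once the printed loss stays at 0:

# Predictions for all samples at once; should match `labels` after convergence
print(net.predict(data))
print(net.w, net.b)  # learned parameters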