Dual-Form Perceptron Implementation

  • Dual-form perceptron learning strategy

        See Li Hang, Statistical Learning Methods, for the full derivation.
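In the dual form the weight vector is never stored explicitly; it is represented as w = sum_i(alpha_i * y_i * x_i), so prediction only needs inner products between the input and the training points. A minimal sketch of that decision function in plain Python (function and parameter names are illustrative, not from the book):

```python
def dual_decision(x, alpha, labels, features, bias):
    """f(x) = sign(sum_j alpha_j * y_j * <x_j, x> + bias)."""
    s = bias
    for a, y, xj in zip(alpha, labels, features):
        # Inner product <x_j, x> between a training point and the input
        s += a * y * sum(p * q for p, q in zip(xj, x))
    return 1 if s >= 0 else -1
```

Because only inner products ever appear, replacing the inner product with a kernel function turns this directly into a kernel perceptron.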

 

  • Differences between the primal and dual forms

        Both forms use stochastic gradient descent: at each step, one misclassified point is selected and the parameters are updated from its gradient.

        Primal form: updates the weight vector w and the bias b directly;

        Dual form: writes w as a linear combination of the training points, w = sum_i(alpha_i * y_i * x_i), and updates alpha and b instead. All inner products <x_i, x_j> can then be precomputed once as the Gram matrix.
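For comparison, a minimal sketch of the primal-form update on the same toy data used below (plain Python; the function name is illustrative):

```python
def train_primal(features, labels, lr=1, epochs=10):
    """Primal-form perceptron: pick misclassified points one at a
    time and update w and b directly (stochastic gradient descent)."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            # Misclassified when y * (w . x + b) <= 0
            if y * (sum(wk * xk for wk, xk in zip(w, x)) + b) <= 0:
                w = [wk + lr * y * xk for wk, xk in zip(w, x)]
                b += lr * y
    return w, b

w, b = train_primal([[3, 3], [4, 3], [1, 1]], [1, 1, -1])
# → w = [1.0, 1.0], b = -3.0
```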

 

  • Code:
class DualPerceptron(object):

    def __init__(self):
        self.learning_rate = 1
        self.epoch = 10

    def train(self, features, labels):
        n = len(features)
        self.alpha = [0.0] * n
        self.bias = 0.0

        # Precompute the Gram matrix: gram[i][j] = <x_i, x_j>
        print('calc gram')
        self.gram = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                self.gram[i][j] = sum(
                    features[i][k] * features[j][k]
                    for k in range(len(features[0])))
        print('gram over')
        print(self.gram)

        for idx in range(1, self.epoch + 1):
            print('epoch: {}'.format(idx))
            print(self.alpha)
            print(self.bias)
            for i in range(n):
                yi = labels[i]
                # f(x_i) = sum_j alpha_j * y_j * <x_j, x_i> + bias
                total = sum(self.alpha[j] * labels[j] * self.gram[j][i]
                            for j in range(n))
                # Point i is misclassified when y_i * f(x_i) <= 0
                if yi * (total + self.bias) <= 0:
                    self.alpha[i] += self.learning_rate
                    self.bias += self.learning_rate * yi

        print(self.alpha)
        print(self.bias)


if __name__ == '__main__':
    p = DualPerceptron()
    data = [[3, 3], [4, 3], [1, 1]]
    label = [1, 1, -1]
    p.train(data, label)
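After training, the dual solution maps back to the primal parameters via w = sum_i(alpha_i * y_i * x_i); the bias carries over unchanged. On the toy data above, tracing the updates by hand gives alpha = [2, 0, 5] and bias = -3, which are used here as assumed inputs (the helper name is illustrative):

```python
def dual_to_primal(alpha, labels, features):
    """Recover the primal weight vector w = sum_i alpha_i * y_i * x_i."""
    w = [0.0] * len(features[0])
    for a, y, x in zip(alpha, labels, features):
        for k, xk in enumerate(x):
            w[k] += a * y * xk
    return w

data = [[3, 3], [4, 3], [1, 1]]
label = [1, 1, -1]
alpha = [2.0, 0.0, 5.0]  # assumed converged values on this toy data
w = dual_to_primal(alpha, label, data)
# → w = [1.0, 1.0]; with bias -3 this separates the three points
```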

 

