Machine Learning: The Perceptron

Introduction

In this lab you will implement the Perceptron algorithm. Before starting, upload the image Perceptron_figure.png to the default path.

The grading criteria are given by the key points below:

# Import the required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris

%matplotlib inline

# Load the iris dataset
iris = load_iris() 

X = iris.data[:, (0, 1)]  # sepal length, sepal width (the first two feature columns)
y = (iris.target == 0).astype(int)  # 1 for setosa, 0 for the other two classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
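A quick check of what was just loaded (a minimal sketch; it only prints the column names and split sizes, to confirm that columns 0 and 1 are the sepal measurements):

# Confirm which features columns 0 and 1 refer to, and how the split looks
print(iris.feature_names[:2])   # expected: ['sepal length (cm)', 'sepal width (cm)']
print('train samples:', X_train.shape[0], 'test samples:', X_test.shape[0])
print('positive (setosa) ratio in y_train:', y_train.mean())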

**Key Point 1:** **Implement the sign (step) activation function**, defined as:

f(z) = 1 if z > 0, otherwise f(z) = 0

If the result is 1.0, the check passes.

# ====================== Fill in your code here ======================
def activation(z):
    # Step activation: return 1.0 for a positive input, 0.0 otherwise
    if z > 0:
        output = 1.0
    else:
        output = 0.0
    return output

# =====================================================================
output = activation(5)
print('Activation function output:\n', output)
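The training and prediction loops below call activation on one score at a time; for reference, a vectorized variant (a sketch; activation_vec is a hypothetical helper, not part of the lab) handles a whole array of scores at once:

def activation_vec(z):
    # Element-wise step function: 1.0 where z > 0, otherwise 0.0
    return np.where(np.asarray(z) > 0, 1.0, 0.0)

print(activation_vec(np.array([-2.0, 0.0, 5.0])))  # expected: [0. 0. 1.]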

**Key Point 2:** **Implement the perceptron training function.** Inputs: training samples X, class labels y, learning rate learning_rate, number of epochs epochs. Outputs: weight vector weights and bias bias. If the result is weights=[-0.0492 0.0657], bias=0.069, the check passes.

# ====================== Fill in your code here ======================
def fit(X, y, learning_rate, epochs):
    weights = np.zeros(X.shape[1])
    bias = 0
    for epoch in range(epochs):
        for i in range(X.shape[0]):
            # Perceptron update: the weights move only when the prediction is wrong
            z = np.dot(X[i], weights) + bias
            y_pre = activation(z)
            weights += learning_rate * (y[i] - y_pre) * X[i]
            bias += learning_rate * (y[i] - y_pre)
    return weights, bias
# =====================================================================

weights, bias = fit(X_train, y_train, 0.001, 300)
print('weights:\n', np.around(weights, decimals=4))
print('bias:\n', np.around(bias, decimals=4))
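Each inner-loop iteration applies the perceptron update rule w ← w + η(y − ŷ)x, b ← b + η(y − ŷ), so the parameters change only on misclassified samples. A small worked example with made-up numbers (purely illustrative, not part of the lab):

# One hand-computed update step: eta = 0.1, sample x = [2.0, 1.0] with label y = 1,
# starting from zero weights and bias (all values are hypothetical)
eta = 0.1
x_i, y_i = np.array([2.0, 1.0]), 1
w, b = np.zeros(2), 0.0
y_hat = activation(np.dot(x_i, w) + b)   # z = 0, so y_hat = 0.0 (misclassified)
w = w + eta * (y_i - y_hat) * x_i        # -> [0.2, 0.1]
b = b + eta * (y_i - y_hat)              # -> 0.1
print(w, b)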

**Key Point 3:** **Implement the prediction function.** Inputs: weight vector weights, bias bias, and samples X. Output: predicted labels y_pre. If the result is 0, the check passes.

# ====================== Fill in your code here ======================
def predict(weights, bias, X):
    y_pre = np.zeros(X.shape[0])
    for i in range(X.shape[0]):
        z = np.dot(X[i], weights) + bias
        y_pre[i] = activation(z)
    return y_pre
# =====================================================================
# Use a separate name so the iris feature matrix X is not overwritten
X_rand = np.random.random((1, 100))
output = predict(np.zeros(X_rand.shape[1]), 0, X_rand)
print('Prediction function output:\n', output)
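For reference, the same predictions can be computed without the Python loop by using one matrix product and an element-wise threshold (a sketch; predict_vec and X_demo are hypothetical names):

def predict_vec(weights, bias, X):
    # Score all samples at once, then threshold element-wise
    z = X @ weights + bias
    return np.where(z > 0, 1.0, 0.0)

X_demo = np.random.random((3, 2))                         # hypothetical demo input
print(predict_vec(np.zeros(X_demo.shape[1]), 0, X_demo))  # all zeros, matching the loop version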

**Key Point 4:** **Implement the perceptron function** and test its predictions on the iris test data X_test. If the result is 1.0, the check passes.

# ====================== Fill in your code here ======================
def MyPerceptron(X_train, y_train, X_test, learning_rate, epochs):
    # Train on the training split, then predict on the test split
    weights, bias = fit(X_train, y_train, learning_rate, epochs)
    y_pre = predict(weights, bias, X_test)
    return y_pre, weights, bias
# =====================================================================
y_pre, weights, bias = MyPerceptron(X_train, y_train, X_test, 0.001, 300)
Acc_MyPerceptron = accuracy_score(y_pre, y_test)
print('Classification accuracy:\n', Acc_MyPerceptron)
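As a quick sanity check (not required by the grading), the same weights can be evaluated on the training split too; setosa is close to linearly separable from the other classes on these two sepal features, so both accuracies should be high:

# Evaluate the manually trained perceptron on both splits (sanity check only)
y_pre_train = predict(weights, bias, X_train)
print('Training accuracy:', accuracy_score(y_train, y_pre_train))
print('Test accuracy    :', accuracy_score(y_test, y_pre))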

**Key Point 5:** **Compare with sklearn's built-in Perceptron.** Call sklearn's Perceptron and test its predictions on the iris test data X_test. If the result is 0.88, the check passes.

# ====================== Fill in your code here ======================
from sklearn.linear_model import Perceptron
sk_perceptron = Perceptron()
sk_perceptron.fit(X_train, y_train)
y_pre_sk = sk_perceptron.predict(X_test)
Acc_sk_Perceptron = accuracy_score(y_pre_sk, y_test)
# =====================================================================
sk_weights = np.squeeze(sk_perceptron.coef_)
sk_bias = sk_perceptron.intercept_
print('Classification accuracy:\n', Acc_sk_Perceptron)
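The gap between 1.0 and 0.88 comes mainly from sklearn's defaults, which differ from the manual loop above (in recent sklearn versions, Perceptron uses eta0=1.0, max_iter=1000, tol=1e-3 and shuffles the data each epoch, to the best of my knowledge). A sketch of aligning some of these settings; results may still differ because the stopping rules and update order are not identical:

# Hypothetical re-run with settings closer to the manual implementation
sk_perceptron_tuned = Perceptron(eta0=0.001, max_iter=300, tol=None, shuffle=False, random_state=42)
sk_perceptron_tuned.fit(X_train, y_train)
print('Tuned sklearn accuracy:', accuracy_score(y_test, sk_perceptron_tuned.predict(X_test)))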

**Key Point 6:** **Visualize the classification results.**

plt.figure()

A = X_train[y_train > 0]
B = X_train[y_train <= 0]
plt.scatter(A[:, 0], A[:, 1], color="blue", label='training: class2')
plt.scatter(B[:, 0], B[:, 1], color="red", label='training: class1')

A = X_test[y_test > 0]
B = X_test[y_test <= 0]
plt.scatter(A[:, 0], A[:, 1], color="blue", marker='d', label='testing: class2')
plt.scatter(B[:, 0], B[:, 1], color="red", marker='d', label='testing: class1')

x1 = np.linspace(4, 8, 100)
x2 = -(weights[0] * x1 + bias) / weights[1]
plt.plot(x1, x2, 'g', label='MyPerceptron')

x1 = np.linspace(4, 8, 100)
x2 = -(sk_weights[0] * x1 + sk_bias) / sk_weights[1]
plt.plot(x1, x2, label='skPerceptron', linestyle='--')

plt.xlim(4.0, 8.0)
plt.ylim(2.0, 5.0)
plt.title("Perceptron")
plt.legend()
plt.show()
# ====================== Fill in your code here ======================
plt.grid(True, linestyle='--', linewidth=0.5)
plt.plot(X_train[y_train == 0, 0], X_train[y_train == 0, 1], 'ro', label='Training: Class 1')
plt.plot(X_train[y_train == 1, 0], X_train[y_train == 1, 1], 'bo', label='Training: Class 2')
plt.plot(X_test[y_test == 0, 0], X_test[y_test == 0, 1], 'D', color='lightcoral', label='Testing: Class 1')
plt.plot(X_test[y_test == 1, 0], X_test[y_test == 1, 1], 'D', color='lightskyblue', label='Testing: Class 2')
plt.plot([], [], color='green', linewidth=2, linestyle='solid', label='MyPerceptron')
plt.plot([], [], color='black', linewidth=2, linestyle='dashed', label='skPerceptron')
# Grid range covering the two sepal features
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
Z = predict(weights, bias, np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contour(xx, yy, Z, colors='green', linewidths=2, linestyles='solid', levels=[0.5])
sk_weights = np.squeeze(sk_perceptron.coef_)
sk_bias = sk_perceptron.intercept_
sk_Z = predict(sk_weights, sk_bias, np.c_[xx.ravel(), yy.ravel()])
sk_Z = sk_Z.reshape(xx.shape)
plt.contour(xx, yy, sk_Z, colors='black', linewidths=2, linestyles='dashed', levels=[0.5])
plt.title('Perceptron')
plt.legend()
plt.show()

# ============================================================= 
