How for...in works internally in Python

A `for` loop works by obtaining an iterator from the iterable, repeatedly calling `next()` on that iterator, and catching the `StopIteration` exception raised by the final `next()` call to end the loop.

Below we use a `while` loop to simulate what `for...in` does:

```python
lists = [i * 2 for i in range(5)]
for temp in lists:
    print(temp, end='')

print('\r\nBelow is the output of the while simulation of for...in')

iterator_ = iter(lists)                   # step 1: get an iterator from the iterable
while True:
    try:
        print(next(iterator_), end='')    # step 2: fetch the next value
    except StopIteration:
        # step 3: the iterator is exhausted; for...in silently swallows this
        break
```

The output is:

```
02468
Below is the output of the while simulation of for...in
02468
```
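The same protocol explains why any object of our own can be traversed with `for...in`: it only needs `__iter__` and `__next__`. As a minimal sketch (the `Countdown` class below is an illustrative example, not from the original code):

```python
class Countdown:
    """Counts down from start to 1; implements the protocol for...in relies on."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        # for...in calls iter() on the object once, which invokes this method
        return self

    def __next__(self):
        # for...in calls next() on each pass; raising StopIteration ends the loop
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value

for n in Countdown(3):
    print(n, end='')   # prints 321
```

Because `Countdown.__next__` raises `StopIteration` itself, the `for` loop terminates cleanly with no explicit `try/except` in user code.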



KNN (K-Nearest Neighbors) is a simple yet very effective algorithm for classification and regression. The idea: find the K training samples closest to the test point, then determine the test point's class from the classes of those K neighbors.

Below are the steps to implement KNN in Python (the methods in steps 3–6 belong inside the `KNN` class; the complete code follows):

1. Import the required libraries

```python
import numpy as np
from collections import Counter
```

2. Define the KNN class

```python
class KNN:
    def __init__(self, k=3):
        self.k = k
```

3. Define the distance function

```python
def euclidean_distance(self, x1, x2):
    return np.sqrt(np.sum((x1 - x2) ** 2))
```

4. Train the model

```python
def fit(self, X, y):
    self.X_train = X
    self.y_train = y
```

5. The prediction function

```python
def predict(self, X):
    y_pred = [self._predict(x) for x in X]
    return np.array(y_pred)
```

6. The internal prediction helper

```python
def _predict(self, x):
    distances = [self.euclidean_distance(x, x_train) for x_train in self.X_train]
    k_indices = np.argsort(distances)[:self.k]
    k_nearest_labels = [self.y_train[i] for i in k_indices]
    most_common = Counter(k_nearest_labels).most_common(1)
    return most_common[0][0]
```

The complete code:

```python
import numpy as np
from collections import Counter

class KNN:
    def __init__(self, k=3):
        self.k = k

    def euclidean_distance(self, x1, x2):
        return np.sqrt(np.sum((x1 - x2) ** 2))

    def fit(self, X, y):
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        y_pred = [self._predict(x) for x in X]
        return np.array(y_pred)

    def _predict(self, x):
        distances = [self.euclidean_distance(x, x_train) for x_train in self.X_train]
        k_indices = np.argsort(distances)[:self.k]
        k_nearest_labels = [self.y_train[i] for i in k_indices]
        most_common = Counter(k_nearest_labels).most_common(1)
        return most_common[0][0]
```

Example of classifying with this KNN class:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build the KNN model
knn = KNN(k=3)
knn.fit(X_train, y_train)

# Predict
y_pred = knn.predict(X_test)

# Compute the accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```

Note: KNN's performance depends heavily on data quality and feature selection, so real applications usually require repeated experiments and tuning.
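If scikit-learn is not available, the distance-and-vote logic of `_predict` can be sanity-checked on a tiny hand-made dataset. The points and labels below are made up purely for illustration:

```python
import numpy as np
from collections import Counter

# Toy 2-D dataset: two well-separated clusters with made-up labels
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    # Same logic as _predict above: Euclidean distances, k nearest, majority vote
    distances = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    k_indices = np.argsort(distances)[:k]
    votes = Counter(y_train[i] for i in k_indices)
    return votes.most_common(1)[0][0]

print(knn_predict(np.array([0.5, 0.5])))   # → 0 (near the first cluster)
print(knn_predict(np.array([5.5, 5.5])))   # → 1 (near the second cluster)
```

Because the clusters are far apart, all k = 3 nearest neighbors share one label, so the majority vote is unambiguous.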
