Implement a Deque with a Python list

# Complete implementation of a deque ADT
class Deque:
    """Double-ended queue backed by a Python list.

    The end of the list is treated as the front of the deque,
    and index 0 as the rear.
    """

    def __init__(self):
        self.data = []

    def is_empty(self):
        return not self.data

    def add_front(self, item):
        # O(1): append at the end of the list (the front of the deque)
        self.data.append(item)

    def add_rear(self, item):
        # O(n): every existing element shifts right by one
        self.data.insert(0, item)

    def remove_front(self):
        # O(1): pop from the end of the list
        return self.data.pop()

    def remove_rear(self):
        # O(n): popping index 0 shifts the remaining elements left
        return self.data.pop(0)

    def size(self):
        return len(self.data)

if __name__ == '__main__':
    dq = Deque()
    dq.add_front('hello')
    dq.add_front(123)
    dq.add_rear(True)
    dq.add_rear(3.14)
    # Drain the deque from both ends. An even item count is assumed here;
    # with an odd count the second pop in an iteration would raise IndexError.
    while not dq.is_empty():
        print(dq.remove_front())
        print(dq.remove_rear())
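
As written, add_rear() and remove_rear() both touch index 0 of the underlying list, so each of those calls costs O(n). If constant-time operations at both ends matter, the standard library's collections.deque can back the same interface. The wrapper below is a minimal sketch of that idea (the FastDeque name and the wrapper itself are additions for illustration, not part of the original post):

from collections import deque

class FastDeque:
    """Sketch: same interface as Deque above, but O(1) at both ends."""

    def __init__(self):
        self.data = deque()

    def is_empty(self):
        return not self.data

    def add_front(self, item):
        self.data.append(item)        # right side of the deque = front

    def add_rear(self, item):
        self.data.appendleft(item)    # left side of the deque = rear

    def remove_front(self):
        return self.data.pop()

    def remove_rear(self):
        return self.data.popleft()

    def size(self):
        return len(self.data)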

A simple use case for Deque: checking whether a string is a palindrome.

from deque import Deque

def pal_checker(s):
    '''Palindrome checker.
        e.g. 'abcba' is a palindrome
    '''
    ch_deque = Deque()
    for ch in s:
        ch_deque.add_front(ch)

    if ch_deque.size() < 2:   # '' or a single character counts as a palindrome
        return True

    # Compare the two ends until they meet. The while-else clause runs
    # only if the loop finishes without a mismatch (i.e. without break).
    while ch_deque.size() > 1:
        if ch_deque.remove_front() == ch_deque.remove_rear():
            continue
        else:
            break
    else:
        return True
    return False

if __name__ == '__main__':
    print(pal_checker('abcdedcba'))
    print(pal_checker('asdfghfdsa'))
    print(pal_checker('aabbcc'))
    print(pal_checker('asdfghjklkjhgfdsa'))
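
A few assertions make the expected results explicit; this quick check is an illustration added here, not part of the original example:

assert pal_checker('')               # empty string counts as a palindrome
assert pal_checker('abcdedcba')      # odd-length palindrome, reads the same both ways
assert not pal_checker('aabbcc')     # first comparison 'c' != 'a', so not a palindrome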

