What is a one-sided limit?

Introduction

One-sided Limits and Two-sided Limits

Most people are familiar with two-sided limits, shown below.

\lim_{x \to a} f(x) = L \tag{1}

Here, we are going to introduce one-sided limits and show how they relate to two-sided limits.
One-sided limits are written as follows.

\lim_{x \to a^+} f(x) = L \tag{2}

\lim_{x \to a^-} f(x) = L \tag{3}

The difference is the superscript on a, which indicates the direction of approach. Formula (2) means that f(x) gets close to L as x approaches a from the right-hand side.

Likewise, formula (3) means that f(x) gets close to L as x approaches a from the left-hand side.
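One-sided limits can also be computed symbolically. As a minimal sketch (not part of the original article), SymPy's `limit` function takes a `dir` argument: `dir='+'` approaches from the right, `dir='-'` from the left. The function |x|/x is a classic example with a jump at 0:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Abs(x) / x  # jumps from -1 to 1 at x = 0

# right-hand limit, as in formula (2) with a = 0
right = sp.limit(f, x, 0, dir='+')
# left-hand limit, as in formula (3) with a = 0
left = sp.limit(f, x, 0, dir='-')

print(right)  # 1
print(left)   # -1
```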



Graph

See the graph below.

Figure 1: one-sided limits


Compare the graph with formulas (2) and (3). You can say that f(x) gets close to 1 as x approaches 2 from the left-hand side, and that f(x) gets close to 3 as x approaches 2 from the right-hand side.



Correlation between One-sided Limits and Two-sided Limits

When the one-sided limits from both sides are equal to the same value, we can simply express it as a two-sided limit. When the two one-sided limits are not equal, the two-sided limit does not exist, and we must use one-sided limits.

For example:

If \lim_{x \to a^+} f(x) = L and \lim_{x \to a^-} f(x) = L, then \lim_{x \to a} f(x) = L.

If \lim_{x \to a^+} f(x) = L_1 and \lim_{x \to a^-} f(x) = L_2 with L_1 \neq L_2, then \lim_{x \to a} f(x) does not exist.
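This rule can be sketched as a small helper (a hypothetical function written for this illustration, assuming SymPy): it returns the two-sided limit when both one-sided limits agree, and None when they disagree.

```python
import sympy as sp

x = sp.symbols('x')

def two_sided_limit(f, a):
    """Hypothetical helper: return the two-sided limit of f at a,
    or None if the one-sided limits disagree (limit does not exist)."""
    left = sp.limit(f, x, a, dir='-')
    right = sp.limit(f, x, a, dir='+')
    return left if left == right else None

print(two_sided_limit(sp.sin(x) / x, 0))  # 1: both one-sided limits agree
print(two_sided_limit(sp.Abs(x) / x, 0))  # None: the limits are 1 and -1
```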

Summary

  1. A one-sided limit is not the same as a two-sided limit.
  2. If both one-sided limits have the same value, together they give the two-sided limit.



