np.cross Explained

numpy.cross

Reference: official NumPy documentation

Syntax

numpy.cross(a, b, axisa=-1, axisb=-1, axisc=-1, axis=None)

Description

Return the cross product of two (arrays of) vectors.
The cross product of a and b in R^3 is a vector perpendicular
to both a and b. If a and b are arrays of vectors, the vectors
are defined by the last axis of a and b by default, and these axes
can have dimensions 2 or 3. Where the dimension of either a or b is
2, the third component of the input vector is assumed to be zero and the
cross product calculated accordingly. In cases where both input vectors
have dimension 2, the z-component of the cross product is returned.

In short: this computes the cross product of two vectors (or arrays of vectors). The result is perpendicular to both a and b. For arrays of vectors, the vectors lie along the last axis by default; that axis may have length 2 or 3. When it is 2, the third component is taken to be zero before computing.

Parameters

  • a : array_like
    Components of the first vector(s).
  • b : array_like
    Components of the second vector(s).
  • axisa : int, optional
    Axis of a that defines the vector(s). By default, the last axis.
  • axisb : int, optional
    Axis of b that defines the vector(s). By default, the last axis.
  • axisc : int, optional
    Axis of c containing the cross product vector(s). Ignored if
    both input vectors have dimension 2, as the return is scalar.
    By default, the last axis.
  • axis : int, optional
    If defined, the axis of a, b and c that defines the vector(s)
    and cross product(s). Overrides axisa, axisb and axisc.

axisa, axisb and axisc specify which axis of the two inputs and of the output c holds the vectors; axis, if given, overrides all three at once.

Returns

  • c : ndarray
    Vector cross product(s).

Raises

  • ValueError:
    When the dimension of the vector(s) in a and/or b does not
    equal 2 or 3.

A ValueError is raised when the vector axis of a and/or b does not have length 2 or 3.
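A minimal sketch of this behavior, using made-up length-4 inputs:

```python
import numpy as np

# The vector axis must have dimension 2 or 3; length-4 vectors
# therefore raise a ValueError.
try:
    np.cross([1, 2, 3, 4], [1, 2, 3, 4])
except ValueError as exc:
    print("ValueError:", exc)
```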

See Also

  • inner : Inner product
  • outer : Outer product
  • ix_ : Construct index arrays.

Notes

New in version 1.9.0: supports full broadcasting of the inputs.

Examples


Vector cross-product.
>>> x = [1, 2, 3]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([-3,  6, -3])

One vector with dimension 2.
>>> x = [1, 2]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([12, -6, -3])

Equivalently:
>>> x = [1, 2, 0]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([12, -6, -3])

Both vectors with dimension 2.
>>> x = [1,2]
>>> y = [4,5]
>>> np.cross(x, y)
array(-3)

Multiple vector cross-products. Note that the direction of the cross
product vector is defined by the `right-hand rule`.
>>> x = np.array([[1,2,3], [4,5,6]])
>>> y = np.array([[4,5,6], [1,2,3]])
>>> np.cross(x, y)
array([[-3,  6, -3],
       [ 3, -6,  3]])

The orientation of `c` can be changed using the `axisc` keyword.
>>> np.cross(x, y, axisc=0)
array([[-3,  3],
       [ 6, -6],
       [-3,  3]])
        
Change the vector definition of `x` and `y` using `axisa` and `axisb`.
>>> x = np.array([[1,2,3], [4,5,6], [7, 8, 9]])
>>> y = np.array([[7, 8, 9], [4,5,6], [1,2,3]])
>>> np.cross(x, y)
array([[ -6,  12,  -6],
       [  0,   0,   0],
       [  6, -12,   6]])
>>> np.cross(x, y, axisa=0, axisb=0)
array([[-24,  48, -24],
       [-30,  60, -30],
       [-36,  72, -36]])
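The full broadcasting mentioned in the Notes can be sketched by crossing a single vector against an array of vectors (the inputs below are my own illustration):

```python
import numpy as np

# y is broadcast against each row vector of x.
x = np.array([[1, 2, 3],
              [4, 5, 6]])
y = np.array([0, 0, 1])
print(np.cross(x, y))
# [[ 2 -1  0]
#  [ 5 -4  0]]
```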
### The Concept of a Loss Function

A loss function is a tool for evaluating the performance of a machine learning model. It quantifies the difference between predicted and actual values: the lower the loss, the better the model's predictions.

During training, the optimization algorithm aims to minimize the loss, because a smaller loss means the model fits the data more closely and makes better predictions.

---

### Common Loss Functions and Their Use Cases

Several widely used loss functions and the settings they suit:

#### 1. **Mean Squared Error (MSE)**

MSE is the most common loss for regression problems. It is defined as:

\[ L(y,\hat{y}) = \frac{1}{n} \sum_{i=1}^{n}(y_i-\hat{y}_i)^2 \]

where \( y \) is the true value, \( \hat{y} \) the prediction, and \( n \) the number of samples. Because of the squared term, large errors are amplified, so this loss is sensitive to outliers.

```python
import numpy as np

def mse_loss(y_true, y_pred):
    # Mean of the squared residuals; expects NumPy arrays.
    return ((y_true - y_pred) ** 2).mean()
```

#### 2. **Mean Absolute Error (MAE)**

MAE also applies to regression, but it does not penalize large deviations as heavily as MSE. Its formula is:

\[ L(y,\hat{y}) = \frac{1}{n}\sum_{i=1}^{n}|y_i-\hat{y}_i| \]

Compared with MSE, MAE is more robust to outliers.

```python
import numpy as np

def mae_loss(y_true, y_pred):
    # Mean of the absolute residuals.
    return np.abs(y_true - y_pred).mean()
```

#### 3. **Cross-Entropy Loss**

Cross-entropy is mainly used for classification, especially multi-class problems. Binary classification typically pairs a sigmoid with binary cross-entropy, while multi-class classification pairs a softmax with categorical cross-entropy. The binary form is:

\[ L(y,\hat{y}) = -\frac{1}{n}\sum_{i=1}^{n}[y_i\log(\hat{y}_i)+(1-y_i)\log(1-\hat{y}_i)] \]

This formula assumes the targets are encoded as probability distributions.

```python
from scipy.special import log_softmax

def cross_entropy_loss(y_true, logits):
    # Categorical cross-entropy from raw logits; y_true is one-hot encoded.
    log_probs = log_softmax(logits, axis=-1)
    loss = -(y_true * log_probs).sum(axis=-1).mean()
    return loss
```

#### 4. **Huber Loss**

When the data may contain noise, the Huber loss offers a compromise: it combines the smoothness of MSE with the robustness of MAE. Residuals smaller than a threshold δ are penalized quadratically; larger ones grow only linearly.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    # Quadratic for small residuals, linear for large ones.
    error = y_true - y_pred
    is_small_error = np.abs(error) <= delta
    squared_loss = 0.5 * (error ** 2)
    linear_loss = delta * (np.abs(error) - 0.5 * delta)
    return np.where(is_small_error, squared_loss, linear_loss).mean()
```

---

### How to Choose a Loss Function?

The right choice depends on several factors:

- Data characteristics: are there many outliers?
- Business requirements: should certain kinds of errors receive extra attention?
- Computational cost: a more complex loss can lengthen training time.

For example, in financial risk control a false positive can cause unnecessary economic loss, so weights can be adjusted to make the model focus on reducing that kind of mistake.

---
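To make the selection criteria above concrete, here is a small sketch (the data is hypothetical; its last sample is an outlier) comparing how MSE, MAE, and Huber react:

```python
import numpy as np

# Hypothetical regression data: the last target value is an outlier.
y_true = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
y_pred = np.array([1.1, 2.1, 2.9, 4.2, 5.0])

mse = ((y_true - y_pred) ** 2).mean()
mae = np.abs(y_true - y_pred).mean()

delta = 1.0
err = y_true - y_pred
huber = np.where(np.abs(err) <= delta,
                 0.5 * err ** 2,
                 delta * (np.abs(err) - 0.5 * delta)).mean()

# The single outlier dominates MSE far more than MAE or Huber.
print(mse, mae, huber)
```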