Gaussian Kernel: Python Implementation

import numpy as np
from scipy.spatial.distance import cdist
RBF kernel

$k(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right)$

The function rbf_kernel computes the radial basis function (RBF) kernel between two vectors. This kernel is defined as:
$k(x, y) = \exp(-\gamma \|x - y\|^2)$
where $x$ and $y$ are the input vectors. With $\gamma = \frac{1}{2\sigma^2}$, this is exactly the Gaussian kernel of variance $\sigma^2$ defined above.
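In code, converting from the sigma parameterization to gamma is a one-liner (the value of sigma below is purely illustrative):

sigma = 2.0                       # illustrative bandwidth
gamma = 1.0 / (2.0 * sigma ** 2)  # gives exp(-gamma * ||x - y||^2) == exp(-||x - y||^2 / (2 * sigma^2))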

The Euclidean distance between a pair of row vectors x and y can be computed as:
dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y))
This formulation has two advantages over other ways of computing distances.

  1. First, it is computationally efficient when dealing with sparse data.
  2. Second, if one argument varies but the other remains unchanged, then dot(x, x) and/or dot(y, y) can be pre-computed.
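As a minimal sketch of the second point, suppose Y stays fixed (e.g. a training set queried repeatedly): its squared row norms can be computed once and reused for every new X. The helper names below are illustrative, not from any library.

def precompute_Y_norms(Y):
    # squared norm of every row of Y, shape (1, n_samples_Y); computed once up front
    return (Y ** 2).sum(axis=1).reshape(1, -1)

def rbf_against_fixed_Y(X, Y, Y_norm_squared, gamma=0.1):
    # only the X-dependent terms are recomputed per query
    X_norm_squared = (X ** 2).sum(axis=1).reshape(-1, 1)
    squared_distances = X_norm_squared + Y_norm_squared - 2 * np.dot(X, Y.T)
    return np.exp(-gamma * squared_distances)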

There are four ways to obtain the kernel matrix.

  • Directly from the definition, expanding the squared distance with dot products.
def rbf_kernel_1(X, Y, gamma=0.1):
    """
    param X: ndarray of shape (n_samples_X, n_features)
    param Y: ndarray of shape (n_samples_Y, n_features)
    param gamma: kernel coefficient, default 0.1 (sklearn's convention is 1 / n_features when gamma is None)
    Return: kernel_matrix: ndarray of shape (n_samples_X, n_samples_Y)
    """
    # compute X_norm_squared and Y_norm_squared
    if X.ndim == 1:  # X and Y are single vectors
        X_norm_squared = np.sum(X ** 2)
        Y_norm_squared = np.sum(Y ** 2)
        squared_Euclidean_distances = X_norm_squared + Y_norm_squared - 2 * np.dot(X, Y)
    else:  # X and Y are matrices of row vectors
        X_norm_squared = (X ** 2).sum(axis=1).reshape(-1, 1)  # shape (n_samples_X, 1)
        Y_norm_squared = (Y ** 2).sum(axis=1).reshape(1, -1)  # shape (1, n_samples_Y)
        # broadcasting expands the sum to the full (n_samples_X, n_samples_Y) matrix
        squared_Euclidean_distances = X_norm_squared + Y_norm_squared - 2 * np.dot(X, Y.T)
    return np.exp(-gamma * squared_Euclidean_distances)
  • By using scipy's cdist to obtain the Euclidean distances.
def rbf_kernel_2(X, Y, gamma=0.1):
    if X.ndim == Y.ndim and X.ndim == 2:  # both are matrices
        return np.exp(-gamma * cdist(X, Y) ** 2)
    else:  # both are 1-D vectors
        return np.exp(-gamma * (np.dot(X, X) + np.dot(Y, Y) - 2 * np.dot(X, Y)))
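cdist also accepts a squared-Euclidean metric directly, which avoids squaring the distance matrix afterwards; a small variant (rbf_kernel_2b is an illustrative name):

def rbf_kernel_2b(X, Y, gamma=0.1):
    # 'sqeuclidean' already returns squared distances, so no ** 2 is needed
    return np.exp(-gamma * cdist(X, Y, metric='sqeuclidean'))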
  • The most direct, no-frills way.
def rbf_kernel_3(X, Y, gamma=0.1):
    # squared distances via the dot-product expansion, then exponentiate
    dist_matrix = np.sum(X ** 2, 1).reshape(-1, 1) + np.sum(Y ** 2, 1) - 2 * np.dot(X, Y.T)
    return np.exp(-gamma * dist_matrix)
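One caveat with the dot-product expansion used by rbf_kernel_1 and rbf_kernel_3: floating-point cancellation can leave tiny negative values in the squared-distance matrix (sklearn's euclidean_distances clips at zero for the same reason). A guarded variant, with rbf_kernel_3_safe as an illustrative name:

def rbf_kernel_3_safe(X, Y, gamma=0.1):
    dist_matrix = np.sum(X ** 2, 1).reshape(-1, 1) + np.sum(Y ** 2, 1) - 2 * np.dot(X, Y.T)
    dist_matrix = np.maximum(dist_matrix, 0)  # clip round-off negatives before exponentiating
    return np.exp(-gamma * dist_matrix)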
  • The reference rbf_kernel from sklearn.
from sklearn.metrics.pairwise import rbf_kernel
def rbf_kernel_4(X, Y, gamma=0.1):
    return rbf_kernel(X, Y, gamma=gamma)
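Note that sklearn's rbf_kernel defaults gamma to 1.0 / n_features when gamma is None, so passing gamma explicitly, as above, keeps all four implementations directly comparable.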

A quick test follows.

if "__name__" == "main":
    n = 100
    dim = 1
    X = np.array(np.linspace(1,10,n)).reshape(n,dim)
    Y = np.array(np.linspace(1,10,n)).reshape(n,dim)
print(rbf_kernel_1(X,Y,gamma = 0.1))
print("----------------------------------------------------")
print(rbf_kernel_2(X,Y,gamma = 0.1))
print("----------------------------------------------------")
print(rbf_kernel_3(X,Y,gamma = 0.1))
print("----------------------------------------------------")
print(rbf_kernel_4(X,Y,gamma = 0.1))
[[1.00000000e+00 9.99173895e-01 9.96699673e-01 ... 4.19673698e-04
  3.57208797e-04 3.03539138e-04]
 [9.99173895e-01 1.00000000e+00 9.99173895e-01 ... 4.92247497e-04
  4.19673698e-04 3.57208797e-04]
 [9.96699673e-01 9.99173895e-01 1.00000000e+00 ... 5.76417873e-04
  4.92247497e-04 4.19673698e-04]
 ...
 [4.19673698e-04 4.92247497e-04 5.76417873e-04 ... 1.00000000e+00
  9.99173895e-01 9.96699673e-01]
 [3.57208797e-04 4.19673698e-04 4.92247497e-04 ... 9.99173895e-01
  1.00000000e+00 9.99173895e-01]
 [3.03539138e-04 3.57208797e-04 4.19673698e-04 ... 9.96699673e-01
  9.99173895e-01 1.00000000e+00]]
----------------------------------------------------
(The printed outputs of rbf_kernel_2, rbf_kernel_3, and rbf_kernel_4 are identical to the matrix above.)
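Rather than eyeballing the printed matrices, the agreement of the four implementations can be checked programmatically; a minimal sketch, appended inside the __main__ block above:

    K1 = rbf_kernel_1(X, Y, gamma=0.1)
    K2 = rbf_kernel_2(X, Y, gamma=0.1)
    K3 = rbf_kernel_3(X, Y, gamma=0.1)
    K4 = rbf_kernel_4(X, Y, gamma=0.1)
    # all four kernel matrices should match up to floating-point tolerance
    print(np.allclose(K1, K2) and np.allclose(K1, K3) and np.allclose(K1, K4))  # expected: True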