SFA: Slow Feature Analysis and a Code Implementation

I've been reading up on SFA (Slow Feature Analysis) lately. The material online is scattered, so I'm organizing it here for easy reference later:

These three articles gave me a rough understanding of how SFA works:

First: "Video Recognition Algorithms: The Slow Feature Analysis Algorithm" (视频识别算法:慢特征分析算法)

Second: "Slow Feature Analysis (SFA)" (慢特征分析(SFA))

Third: "Machine Learning Tutorial: Slow Feature Analysis for Temporal Feature Mining" (机器学习教程 之 慢特征分析:时序特征挖掘)

I'm planning to use SFA for time-series prediction; I'll come back and add more once I've figured out the implementation~

Haha, I've figured the code out! It's really just a few simple lines: first compute an eigendecomposition, then pull out the components you want.
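For context, those few lines solve the standard linear-SFA problem (as formulated by Wiskott & Sejnowski): given a zero-mean multivariate signal $x(t)$, find weight vectors $w_i$ such that the outputs $y_i(t) = w_i^\top x(t)$ vary as slowly as possible over time,

$$\min_{w_i}\ \big\langle \dot y_i^{\,2} \big\rangle_t \quad \text{s.t.}\quad \langle y_i \rangle_t = 0,\ \ \langle y_i^2 \rangle_t = 1,\ \ \langle y_i y_j \rangle_t = 0\ (j < i).$$

Solving this reduces to whitening the data and then taking the eigenvectors of the covariance matrix of the temporal derivatives with the smallest eigenvalues, which is exactly what the code below (tidied up from versions found online) does: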

import numpy as np

class SFA:  # slow feature analysis class
    def __init__(self):
        self._B = None            # covariance of the input signals
        self._Z = None            # covariance of the temporal derivatives
        self._eigenVector = None  # projection matrix onto the slow features

    def getB(self, data):
        # Covariance of the inputs; data is assumed zero-mean, with
        # samples (time steps) in rows and features in columns.
        self._B = data.T.dot(data) / (data.shape[0] - 1)

    def getZ(self, data):
        # Covariance of the temporal derivatives.
        derivativeData = self.makeDiff(data)
        self._Z = derivativeData.T.dot(derivativeData) / (derivativeData.shape[0] - 1)

    def makeDiff(self, data):
        # Approximate the temporal derivative with first-order differences
        # along the time axis (rows); the last row wraps around so the
        # number of samples stays unchanged.
        diffData = np.zeros_like(data, dtype=float)
        diffData[:-1, :] = data[1:, :] - data[:-1, :]
        diffData[-1, :] = data[0, :] - data[-1, :]
        return diffData

    def fit_transform(self, data, threshold=1e-7, components=-1):
        if components == -1:
            components = data.shape[1]  # default: keep all features

        # Step 1: whiten the data so its covariance becomes the identity,
        # dropping directions whose standard deviation falls below threshold.
        self.getB(data)
        U, s, V = np.linalg.svd(self._B)
        count = len(s)
        for i in range(len(s)):
            if s[i] ** 0.5 < threshold:
                count = i
                break
        s = s[:count] ** 0.5
        S = np.diag(1.0 / s)
        U = U[:, :count]
        whiten = S.dot(U.T)
        Z = whiten.dot(data.T).T

        # Step 2: diagonalize the covariance of the derivatives of the
        # whitened signals. SVD sorts by decreasing eigenvalue, so the
        # last rows of P are the slowest directions.
        self.getZ(Z)
        PT, O, P = np.linalg.svd(self._Z)

        self._eigenVector = P.dot(whiten)
        self._eigenVector = self._eigenVector[-components:, :]

        return data.dot(self._eigenVector.T)

    def transfer(self, data):
        # Project new data onto the slow features learned during fitting.
        return data.dot(self._eigenVector.T)

That's all there is to the SFA code. Calling it is just as simple:

# First instantiate SFA()
sfa = SFA()
# Extract slow features from the training set: x_train is the training
# input, and components is the number of slowest features to keep
trainDataS = sfa.fit_transform(x_train, components=25)
# transfer projects new data onto the slow features learned from training
testDataS = sfa.transfer(x_test)
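
Before wiring this into a real dataset, here is a quick sanity check (my own toy example, not from the articles above): mix one slow sine with two fast signals, and the slowest extracted feature should recover the slow source.

import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)                 # slowly varying source
fast1 = np.sin(23 * t)           # fast sources
fast2 = np.cos(47 * t)
sources = np.stack([slow, fast1, fast2], axis=1)   # shape (500, 3)

rng = np.random.RandomState(0)
mixed = sources.dot(rng.randn(3, 3))   # random invertible linear mixture
mixed -= mixed.mean(axis=0)            # SFA assumes zero-mean inputs

sfa = SFA()
features = sfa.fit_transform(mixed, components=1)

# The single slowest feature should be (anti)correlated with the slow sine
corr = np.corrcoef(features[:, 0], slow)[0, 1]
print("correlation with the slow source:", abs(corr))   # close to 1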


Once the slow features are extracted, you can run regression or classification on them, using e.g. a multilayer perceptron or partial least squares. Here is the prediction code using PLS:

# Import PLS directly from sklearn
from sklearn.cross_decomposition import PLSRegression
# Define the model; n_components must be at most the 25 slow features kept above
pls = PLSRegression(n_components=21)
# Train the model
pls.fit(trainDataS, y_train)
# Predict; squeeze(1) drops the trailing singleton axis for a single target
test_predict = pls.predict(testDataS).squeeze(1)
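
To judge the fit, you would typically score the predictions against the held-out targets. A minimal sketch, assuming a y_test array (not shown in the snippets above) holds the true targets for x_test:

from sklearn.metrics import mean_squared_error, r2_score

# y_test is assumed to hold the true targets for x_test
rmse = mean_squared_error(y_test, test_predict) ** 0.5
print("RMSE:", rmse)
print("R^2:", r2_score(y_test, test_predict))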

