Task 3: Feature Engineering

1. Import packages and read the data

# Package imports
import pandas as pd
import numpy as np
import tsfresh as tsf
from tsfresh import extract_features, select_features  # tsfresh (Time Series Fresh): a time-series feature engineering toolkit
from tsfresh.utilities.dataframe_functions import impute
# Read the data
data_train = pd.read_csv("train.csv")
data_test_A = pd.read_csv("testA.csv")

print(data_train.shape)
print(data_test_A.shape)
(100000, 3)
(20000, 2)
data_train.head()
   id                                  heartbeat_signals  label
0   0  0.9912297987616655,0.9435330436439665,0.764677...    0.0
1   1  0.9714822034884503,0.9289687459588268,0.572932...    0.0
2   2  1.0,0.9591487564065292,0.7013782792997189,0.23...    2.0
3   3  0.9757952826275774,0.9340884687738161,0.659636...    0.0
4   4  0.0,0.055816398940721094,0.26129357194994196,0...    2.0
data_test_A.head()
       id                                  heartbeat_signals
0  100000  0.9915713654170097,1.0,0.6318163407681274,0.13...
1  100001  0.6075533139615096,0.5417083883163654,0.340694...
2  100002  0.9752726292239277,0.6710965234906665,0.686758...
3  100003  0.9956348033996116,0.9170249621481004,0.521096...
4  100004  1.0,0.8879490481178918,0.745564725322326,0.531...

2. Data preprocessing

# Reshape the heartbeat signals from wide to long, adding a time-step column "time" to each signal
train_heartbeat_df = data_train["heartbeat_signals"].str.split(",", expand=True).stack()
# stack() pivots the DataFrame's columns into rows ("stack" as in piling up): after stack(), the frame has more rows
train_heartbeat_df = train_heartbeat_df.reset_index()  # reset the index; the original row index and the stacked column labels become columns level_0 and level_1, and the Series becomes a DataFrame
train_heartbeat_df = train_heartbeat_df.set_index("level_0")  # set level_0 (the original row index) as the index
train_heartbeat_df.index.name = None  # drop the index name
train_heartbeat_df.rename(columns={"level_1":"time", 0:"heartbeat_signals"}, inplace=True)  # rename the columns
train_heartbeat_df["heartbeat_signals"] = train_heartbeat_df["heartbeat_signals"].astype(float)

Step by step, the pipeline above works as follows:

train_heartbeat_df = data_train["heartbeat_signals"].str.split(",", expand=True).stack()
train_heartbeat_df
0      0      0.9912297987616655
       1      0.9435330436439665
       2      0.7646772997256593
       3      0.6185708990212999
       4      0.3796321642826237
                     ...        
99999  200                   0.0
       201                   0.0
       202                   0.0
       203                   0.0
       204                   0.0
Length: 20500000, dtype: object
train_heartbeat_df = train_heartbeat_df.reset_index()
train_heartbeat_df
          level_0  level_1                   0
0               0        0  0.9912297987616655
1               0        1  0.9435330436439665
2               0        2  0.7646772997256593
3               0        3  0.6185708990212999
4               0        4  0.3796321642826237
...           ...      ...                 ...
20499995    99999      200                 0.0
20499996    99999      201                 0.0
20499997    99999      202                 0.0
20499998    99999      203                 0.0
20499999    99999      204                 0.0

20500000 rows × 3 columns

train_heartbeat_df = train_heartbeat_df.set_index("level_0")
train_heartbeat_df
         level_1                   0
level_0
0              0  0.9912297987616655
0              1  0.9435330436439665
0              2  0.7646772997256593
0              3  0.6185708990212999
0              4  0.3796321642826237
...          ...                 ...
99999        200                 0.0
99999        201                 0.0
99999        202                 0.0
99999        203                 0.0
99999        204                 0.0

20500000 rows × 2 columns

train_heartbeat_df.index.name = None
train_heartbeat_df
       level_1                   0
0            0  0.9912297987616655
0            1  0.9435330436439665
0            2  0.7646772997256593
0            3  0.6185708990212999
0            4  0.3796321642826237
...        ...                 ...
99999      200                 0.0
99999      201                 0.0
99999      202                 0.0
99999      203                 0.0
99999      204                 0.0

20500000 rows × 2 columns

type(train_heartbeat_df)
pandas.core.frame.DataFrame
train_heartbeat_df.rename(columns={"level_1":"time", 0:"heartbeat_signals"}, inplace=True)  # rename the columns
train_heartbeat_df
       time   heartbeat_signals
0         0  0.9912297987616655
0         1  0.9435330436439665
0         2  0.7646772997256593
0         3  0.6185708990212999
0         4  0.3796321642826237
...     ...                 ...
99999   200                 0.0
99999   201                 0.0
99999   202                 0.0
99999   203                 0.0
99999   204                 0.0

20500000 rows × 2 columns

train_heartbeat_df.heartbeat_signals.dtypes
dtype('O')
train_heartbeat_df["heartbeat_signals"] = train_heartbeat_df["heartbeat_signals"].astype(float)
train_heartbeat_df
       time  heartbeat_signals
0         0           0.991230
0         1           0.943533
0         2           0.764677
0         3           0.618571
0         4           0.379632
...     ...                ...
99999   200           0.000000
99999   201           0.000000
99999   202           0.000000
99999   203           0.000000
99999   204           0.000000

20500000 rows × 2 columns
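The same wide-to-long reshape can also be sketched with Series.explode (available since pandas 0.25) as an alternative to split(expand=True).stack(); the toy values below are made up for illustration:

```python
import pandas as pd

# Toy frame mimicking data_train: one comma-separated signal string per row.
toy = pd.DataFrame({"heartbeat_signals": ["0.9,0.8,0.1", "1.0,0.5,0.0"]})

# split into lists, explode one value per row (index repeats per record),
# cast to float, and rebuild the time step as a per-record counter
long_df = (toy["heartbeat_signals"].str.split(",")
           .explode()
           .astype(float)
           .to_frame("heartbeat_signals"))
long_df["time"] = long_df.groupby(level=0).cumcount()
print(long_df)
```

Either route yields one row per (record, time step) pair; explode just avoids materializing the intermediate wide frame.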

# Join the reshaped signal features back onto the training data, storing the label column separately
data_train_label = data_train["label"]
data_train = data_train.drop("label", axis=1)
data_train = data_train.drop("heartbeat_signals", axis=1)
data_train = data_train.join(train_heartbeat_df)
# join is a convenient way to merge two DataFrames on their indexes; its parameters largely mirror merge, except that join defaults to a left join (how="left")
data_train
          id  time  heartbeat_signals
0          0     0           0.991230
0          0     1           0.943533
0          0     2           0.764677
0          0     3           0.618571
0          0     4           0.379632
...      ...   ...                ...
99999  99999   200           0.000000
99999  99999   201           0.000000
99999  99999   202           0.000000
99999  99999   203           0.000000
99999  99999   204           0.000000

20500000 rows × 3 columns
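A minimal sketch of the join semantics used above (toy data, made up for illustration): DataFrame.join aligns on the index and defaults to a left join, so each training record fans out to all of its expanded signal rows.

```python
import pandas as pd

# Left frame: one row per record id; right frame: long-format signal
# rows indexed by the same record ids (toy values).
left = pd.DataFrame({"id": [0, 1]}, index=[0, 1])
right = pd.DataFrame({"time": [0, 1, 0], "signal": [0.9, 0.8, 1.0]},
                     index=[0, 0, 1])

# join matches on the index with how="left" by default, so record 0
# fans out to its two signal rows while record 1 keeps its single row
joined = left.join(right)
print(joined)
```

The result has one row per matched (left, right) pair, which is exactly why data_train grows to 205 rows per id.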

data_train[data_train["id"]==1]
   id  time  heartbeat_signals
1   1     0           0.971482
1   1     1           0.928969
1   1     2           0.572933
1   1     3           0.178457
1   1     4           0.122962
..  ..   ...                ...
1   1   200           0.000000
1   1   201           0.000000
1   1   202           0.000000
1   1   203           0.000000
1   1   204           0.000000

205 rows × 3 columns

3. Time-series feature processing with tsfresh

1. Feature extraction. tsfresh (Time Series Fresh) is a third-party Python package that automatically computes a large number of features from time-series data. It also ships methods for feature-importance evaluation and feature selection, which makes it a solid choice for feature extraction in both classification and regression problems on time series. Official documentation: Introduction — tsfresh 0.17.1.dev24+g860c4e1 documentation

from tsfresh.feature_extraction import extract_features,MinimalFCParameters

# Feature extraction
settings=MinimalFCParameters()
train_features = extract_features(data_train, column_id='id', column_sort='time',default_fc_parameters = settings)
train_features.head()
Feature Extraction: 100%|██████████████████████████████████████████████████████████████| 10/10 [01:02<00:00,  6.25s/it]
(column names abbreviated; each is prefixed with heartbeat_signals__)

   sum_values    median      mean  length  standard_deviation  variance  root_mean_square   maximum  minimum
0   38.927945  0.125531  0.189892   205.0            0.229783  0.052800          0.298093  1.000000      0.0
1   19.445634  0.030481  0.094857   205.0            0.169080  0.028588          0.193871  1.000000      0.0
2   21.192974  0.000000  0.103380   205.0            0.184119  0.033900          0.211157  1.000000      0.0
3   42.113066  0.241397  0.205430   205.0            0.186186  0.034665          0.277248  1.000000      0.0
4   69.756786  0.000000  0.340277   205.0            0.366213  0.134112          0.499901  0.999908      0.0

2. Feature selection. train_features holds the time-series features extracted from heartbeat_signals; with MinimalFCParameters that is only the 9 basic features shown above, whereas tsfresh's full default settings would compute 779 (explanations for all of them are in the official documentation). Some extracted features may come out as NaN (this happens when the data does not support computing that feature); remove the NaN values as follows:

from tsfresh.utilities.dataframe_functions import impute

# Remove NaN values from the extracted features
impute(train_features)
(column names abbreviated; each is prefixed with heartbeat_signals__)

       sum_values    median      mean  length  standard_deviation  variance  root_mean_square   maximum  minimum
0       38.927945  0.125531  0.189892   205.0            0.229783  0.052800          0.298093  1.000000      0.0
1       19.445634  0.030481  0.094857   205.0            0.169080  0.028588          0.193871  1.000000      0.0
2       21.192974  0.000000  0.103380   205.0            0.184119  0.033900          0.211157  1.000000      0.0
3       42.113066  0.241397  0.205430   205.0            0.186186  0.034665          0.277248  1.000000      0.0
4       69.756786  0.000000  0.340277   205.0            0.366213  0.134112          0.499901  0.999908      0.0
...           ...       ...       ...     ...                 ...       ...               ...       ...      ...
99995   63.323449  0.388402  0.308895   205.0            0.211636  0.044790          0.374441  1.000000      0.0
99996   69.657534  0.421138  0.339793   205.0            0.199966  0.039986          0.394266  1.000000      0.0
99997   40.897057  0.213306  0.199498   205.0            0.200657  0.040263          0.282954  1.000000      0.0
99998   42.333303  0.264974  0.206504   205.0            0.164380  0.027021          0.263941  1.000000      0.0
99999   53.290117  0.320124  0.259952   205.0            0.194868  0.037974          0.324883  1.000000      0.0

100000 rows × 9 columns
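What impute does can be sketched in plain pandas (an approximation for illustration, not tsfresh's implementation): column-wise, +inf is replaced by the column maximum, -inf by the column minimum, and NaN by the column median, each computed over the finite values.

```python
import numpy as np
import pandas as pd

def impute_sketch(df):
    """Rough stand-in for tsfresh's impute: fix NaN/inf column-wise."""
    out = df.copy()
    for col in out.columns:
        finite = out[col][np.isfinite(out[col])]
        out[col] = out[col].replace(np.inf, finite.max())   # +inf -> column max
        out[col] = out[col].replace(-np.inf, finite.min())  # -inf -> column min
        out[col] = out[col].fillna(finite.median())         # NaN  -> column median
    return out

# Toy feature matrix with one NaN and both infinities
features = pd.DataFrame({"f1": [1.0, np.nan, 3.0], "f2": [np.inf, 2.0, -np.inf]})
print(impute_sketch(features))
```

The real impute additionally handles degenerate columns (all-NaN, etc.) and operates in place; this sketch only shows the replacement rule.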

Next, select features by their relevance to the response variable. This is a two-step process: first, the relevance of each individual feature to the response is computed; then the Benjamini-Yekutieli procedure [1] decides which features can be kept.
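The Benjamini-Yekutieli step can be sketched as follows (my own illustration, not tsfresh's code): given one p-value per feature from the univariate relevance tests, keep every feature whose sorted p-value falls under the BY step-up threshold, which includes a harmonic-number correction for arbitrary dependence between the tests. The p-values below are hypothetical.

```python
import numpy as np

def benjamini_yekutieli(p_values, fdr=0.05):
    """Return a boolean mask of features kept at the given FDR level."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    c_m = np.sum(1.0 / np.arange(1, m + 1))          # harmonic correction term
    order = np.argsort(p)                            # sort p-values ascending
    thresholds = np.arange(1, m + 1) * fdr / (m * c_m)
    below = p[order] <= thresholds                   # step-up comparison
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])             # largest passing rank
        keep[order[: k + 1]] = True                  # keep all up to that rank
    return keep

# Hypothetical p-values for five candidate features
print(benjamini_yekutieli([0.001, 0.002, 0.2, 0.6, 0.04]))
```

select_features wraps this kind of test per feature and drops everything the procedure rejects, which is why its output can have far fewer columns than its input.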

from tsfresh import select_features

# Select features by their relevance to the data's label
train_features_filtered = select_features(train_features, data_train_label)

train_features_filtered
(column names abbreviated; each is prefixed with heartbeat_signals__)

       sum_values    median      mean  standard_deviation  variance  root_mean_square   maximum  minimum
0       38.927945  0.125531  0.189892            0.229783  0.052800          0.298093  1.000000      0.0
1       19.445634  0.030481  0.094857            0.169080  0.028588          0.193871  1.000000      0.0
2       21.192974  0.000000  0.103380            0.184119  0.033900          0.211157  1.000000      0.0
3       42.113066  0.241397  0.205430            0.186186  0.034665          0.277248  1.000000      0.0
4       69.756786  0.000000  0.340277            0.366213  0.134112          0.499901  0.999908      0.0
...           ...       ...       ...                 ...       ...               ...       ...      ...
99995   63.323449  0.388402  0.308895            0.211636  0.044790          0.374441  1.000000      0.0
99996   69.657534  0.421138  0.339793            0.199966  0.039986          0.394266  1.000000      0.0
99997   40.897057  0.213306  0.199498            0.200657  0.040263          0.282954  1.000000      0.0
99998   42.333303  0.264974  0.206504            0.164380  0.027021          0.263941  1.000000      0.0
99999   53.290117  0.320124  0.259952            0.194868  0.037974          0.324883  1.000000      0.0

100000 rows × 8 columns

After feature selection, 8 of the 9 extracted features remain: the length column was dropped, since every signal has exactly 205 samples and a constant feature carries no information about the label.
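Why length gets dropped can be seen with a quick self-contained check (toy numbers echoing the tables above): a constant column has a single unique value, so it cannot separate the classes.

```python
import pandas as pd

# Toy feature matrix: length is constant across records, mean is not
features = pd.DataFrame({
    "length": [205.0, 205.0, 205.0, 205.0],
    "mean":   [0.19, 0.09, 0.10, 0.34],
})

# A constant column has exactly one unique value and is never relevant
constant_cols = [c for c in features.columns if features[c].nunique() == 1]
print(constant_cols)
```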

