A Brief Analysis of "Task 3: Advanced Feature Engineering to Keep Climbing the Leaderboard"
Preface
After reading the "Task 3: Advanced Feature Engineering to Keep Climbing the Leaderboard" document, in which the platform presents advanced techniques such as new biologically motivated features, hyperparameter tuning, ensemble learning, and hybrid learning to further improve model performance and obtain more accurate and stable results in siRNA silencing-efficiency prediction, this article offers analysis and thoughts on optimization directions built on those methods. (Time was limited, so only part of them was implemented.)
1. New Biologically Motivated Features
1.1 Sequence Alignment of the Antisense Strand Against the Target Gene Sequence
Concept: Strengthen the model by aligning the antisense strand against the target gene sequence.
Background: The complementarity between an siRNA and its target mRNA drives the silencing mechanism, so this complementarity can be used to predict the remaining mRNA level.
Sample code:
```python
import pandas as pd
from Bio import pairwise2

def get_feat_align(df, anti=False):
    """Local-alignment score between each siRNA strand and its target gene sequence."""
    features = []
    for idx, row in df.iterrows():
        target_seq = str(row['gene_target_seq'])
        siRNA_seq = str(row['siRNA_antisense_seq']) if anti else str(row['siRNA_sense_seq'])
        # localxx: local alignment, match = 1, no mismatch/gap penalties
        alignments = pairwise2.align.localxx(target_seq, siRNA_seq)
        score = alignments[0].score if alignments else 0
        features.append(score)
    return pd.Series(features, name='align_score')
```
Optimization directions:
Explore different alignment methods: Besides local alignment, try global alignment to capture different interaction patterns.
Multiple scoring criteria: Use several scoring criteria (e.g., match rate, alignment score) to enrich the feature representation.
Fragment-level alignment: Split the siRNA sequence into multiple fragments and locally align each against the target gene sequence to obtain finer-grained match features.
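The fragment-alignment idea can be sketched even without Biopython by sliding each fragment along the target and recording its best exact-match fraction, a crude stand-in for a local alignment score. The function name and fragment count below are illustrative:

```python
def fragment_match_scores(sirna_seq, target_seq, n_frags=3):
    """Split the siRNA into n_frags pieces and, for each piece, record the best
    exact-match fraction found anywhere in the target sequence."""
    frag_len = len(sirna_seq) // n_frags
    scores = []
    for i in range(n_frags):
        frag = sirna_seq[i * frag_len:(i + 1) * frag_len]
        best = 0.0
        # Slide the fragment over every window of the target of the same length
        for j in range(len(target_seq) - len(frag) + 1):
            window = target_seq[j:j + len(frag)]
            matches = sum(a == b for a, b in zip(frag, window))
            best = max(best, matches / len(frag))
        scores.append(best)
    return scores
```

Each returned score could then become a separate feature column, mirroring the `get_feat_align` pattern above.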
1.2 GC Content as a Feature
Concept: Use GC content as an important predictive feature.
Background: GC content affects siRNA duplex binding and unwinding; an ideal GC-content range is critical for silencing efficiency.
Sample code:
```python
def siRNA_feat_builder(s, anti=False):
    name = "anti" if anti else "sense"
    df_feat = s.to_frame()
    # Fraction of G and C bases over the full sequence length
    df_feat[f"feat_siRNA_{name}_GC_frac"] = (s.str.count("G") + s.str.count("C")) / s.str.len()
    return df_feat.iloc[:, 1:]  # drop the raw sequence column
```
Optimization directions:
Local GC content: Compute the GC content of different siRNA sequence fragments and analyze each fragment's effect on silencing efficiency.
Position-specific GC content: Extract the GC percentage of nucleotides 2-7 and 8-18 of the antisense strand as features.
Energy-valley feature: Extract the GC content of the region around nucleotides 9-14 of the siRNA as an energy-valley feature.
Sample code:
```python
def siRNA_feat_builder_gc(s, anti=False):
    name = "anti" if anti else "sense"
    df = s.to_frame()
    # Whether the overall GC fraction falls in the favorable 36%-52% range
    GC_frac = (s.str.count("G") + s.str.count("C")) / s.str.len()
    df[f"feat_siRNA_{name}_GC_in"] = (GC_frac >= 0.36) & (GC_frac <= 0.52)
    # GC fraction of nucleotides 2-7 (0-based slice [1:7])
    GC_frac1 = (s.str[1:7].str.count("G") + s.str[1:7].str.count("C")) / s.str[1:7].str.len()
    df[f"feat_siRNA_{name}_GC_in1"] = GC_frac1
    # GC fraction of nucleotides 9-18 (0-based slice [8:18])
    GC_frac2 = (s.str[8:18].str.count("G") + s.str[8:18].str.count("C")) / s.str[8:18].str.len()
    df[f"feat_siRNA_{name}_GC_in2"] = GC_frac2
    return df.iloc[:, 1:]
```
1.3 Encoding Modified Nucleotides
Concept: Encode chemically modified nucleotides differently from ordinary ones.
Background: Chemical modification of nucleotides can improve siRNA stability and specificity and reduce toxicity.
Sample code:
```python
def siRNA_feat_builder_mod(s, anti=False):
    # `modified_nucleotides` is assumed to be defined elsewhere as the set of
    # modified-nucleotide symbols present in the dataset
    name = "anti" if anti else "sense"
    df = s.to_frame()
    # Example encoding: one indicator per modified nucleotide at the first/last position
    for pos in [0, -1]:
        for c in modified_nucleotides:
            df[f"feat_siRNA_{name}_mod_{c}_{'front' if pos == 0 else 'back'}"] = s.str[pos] == c
    return df.iloc[:, 1:]
```
Optimization directions:
Richer encodings: Encode the chemical nature of each modification (e.g., 2'-O-methyl, 2'-MOE) as features.
Position specificity: Account for where in the sequence each modification occurs.
Modification combinations: Study the combined effect of multiple chemical modifications on siRNA efficiency.
Sample code:
```python
def siRNA_feat_builder_complex_mod(s, anti=False):
    name = "anti" if anti else "sense"
    df = s.to_frame()
    # Indicators for the first two and last two positions; non-negative
    # positions index from the front, negative positions from the back
    for pos in [0, 1, -1, -2]:
        for c in modified_nucleotides:
            df[f"feat_siRNA_{name}_mod_{c}_{'front' if pos >= 0 else 'back'}_{pos}"] = s.str[pos] == c
    return df.iloc[:, 1:]
```
2. Model Optimization
2.1 Sample Weighting
Concept: Assign higher weights to samples with a low remaining mRNA percentage.
Sample code:
```python
weight_ls = np.array(feats['mRNA_remaining_pct'].apply(lambda x: 2 if 0 <= x <= 30 else 1))
```
Optimization directions:
Dynamic weights: Adjust weights based on the error after each training round.
Multi-level weights: Set different weights for different ranges for finer-grained control.
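A minimal sketch of such multi-level weighting; the thresholds and weight values are illustrative, not tuned:

```python
import numpy as np

def multi_level_weights(pct):
    """Piecewise weighting: lower mRNA-remaining ranges get larger weights so the
    model focuses on highly effective siRNAs. Bins are [lo, hi) with a weight each."""
    bins = [(0, 10, 3.0), (10, 30, 2.0), (30, 60, 1.5)]
    w = np.ones_like(pct, dtype=float)  # everything else keeps weight 1
    for lo, hi, weight in bins:
        w[(pct >= lo) & (pct < hi)] = weight
    return w
```

The resulting array drops into `lgb.Dataset(..., weight=...)` exactly like `weight_ls` above.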
2.2 Custom Loss Function
Concept: Use a custom scoring function that combines MAE and an F1 score.
Sample code:
```python
from sklearn.metrics import mean_absolute_error, precision_score, recall_score

def calculate_metrics(preds, data, threshold=30):
    """Custom LightGBM eval: blend overall MAE with F1-weighted MAE in the low range."""
    y_true = data.get_label()
    mae = mean_absolute_error(y_true, preds)
    # Binarize: 1 if the remaining mRNA percentage falls in [0, threshold]
    y_true_binary = ((y_true <= threshold) & (y_true >= 0)).astype(int)
    y_pred_binary = ((preds <= threshold) & (preds >= 0)).astype(int)
    precision = precision_score(y_true_binary, y_pred_binary, average="binary", zero_division=0)
    recall = recall_score(y_true_binary, y_pred_binary, average="binary", zero_division=0)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0
    # MAE restricted to the truly low-remaining samples
    range_mae = mean_absolute_error(y_true[y_true_binary == 1], preds[y_true_binary == 1])
    score = (1 - mae / 100) * 0.5 + (1 - range_mae / 100) * f1 * 0.5
    return "custom_score", score, True  # True: higher is better
```
Optimization directions:
Metric combinations: Try different metric combinations and weights to refine the custom score.
Dynamic threshold: Adjust the threshold dynamically based on training- and validation-set performance.
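As a sketch of dynamic thresholding, one could periodically rescan a small candidate grid on the validation set and keep the cutoff with the best F1 for the "low remaining mRNA" class. The helper name and candidate values are illustrative:

```python
import numpy as np

def best_threshold(y_true, preds, candidates=(20, 25, 30, 35, 40)):
    """Scan candidate cutoffs, compute F1 of the induced binary labels at each,
    and return the (threshold, f1) pair with the highest F1."""
    best_t, best_f1 = candidates[0], -1.0
    for t in candidates:
        yt = (y_true <= t).astype(int)
        yp = (preds <= t).astype(int)
        tp = int(np.sum((yt == 1) & (yp == 1)))
        fp = int(np.sum((yt == 0) & (yp == 1)))
        fn = int(np.sum((yt == 1) & (yp == 0)))
        # F1 = 2TP / (2TP + FP + FN), guarded against an empty positive class
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) > 0 else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

The chosen threshold could then be fed back into `calculate_metrics` above between training stages.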
2.3 Adaptive Learning Rate
Concept: Adjust the learning rate according to model performance to prevent overfitting.
Sample code:
```python
def adaptive_learning_rate(decay_rate=0.8, patience=50):
    best_score = float("-inf")
    wait = 0
    def callback(env):
        nonlocal best_score, wait
        current_score = env.evaluation_result_list[-1][2]
        if current_score > best_score:
            best_score = current_score
            wait = 0  # reset the counter on improvement
        else:
            wait += 1
            if wait >= patience:
                # Note: mutating params mid-training may not take effect in every
                # LightGBM version; lgb.reset_parameter is the supported route
                new_lr = env.model.params['learning_rate'] * decay_rate
                env.model.params['learning_rate'] = new_lr
                wait = 0
    return callback
```
Optimization directions:
Learning-rate warm-up: Gradually increase the learning rate early in training before switching to adaptive adjustment.
Cyclical learning rate: Vary the learning rate periodically to help the model escape local optima.
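Both ideas can be combined into a single schedule function; all constants below are illustrative:

```python
import math

def warmup_cyclic_lr(base_lr=0.05, warmup_rounds=100, cycle=500, min_frac=0.5):
    """Return a schedule f(iteration) -> learning rate: linear warm-up to base_lr,
    then a cosine-style oscillation between min_frac*base_lr and base_lr."""
    def schedule(it):
        if it < warmup_rounds:
            # Linear warm-up from base_lr/warmup_rounds up to base_lr
            return base_lr * (it + 1) / warmup_rounds
        phase = ((it - warmup_rounds) % cycle) / cycle  # position within the cycle
        low = base_lr * min_frac
        return low + (base_lr - low) * 0.5 * (1 + math.cos(2 * math.pi * phase))
    return schedule
```

Passed as `callbacks=[lgb.reset_parameter(learning_rate=warmup_cyclic_lr())]`, LightGBM's `reset_parameter` callback will query the schedule with the current iteration number each round.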
2.4 Cross-Validation and Early Stopping
Concept: Use K-fold cross-validation and early stopping to prevent overfitting and improve generalization.
Sample code:
```python
from sklearn.model_selection import KFold
import lightgbm as lgb

def train(feats, n_original):
    # `weight_ls` is the sample-weight array built in section 2.1
    kf = KFold(n_splits=10, shuffle=True, random_state=42)
    gbms = []
    for train_idx, val_idx in kf.split(feats.iloc[:n_original, :]):
        X_train, X_val = feats.iloc[train_idx, :-1], feats.iloc[val_idx, :-1]
        y_train, y_val = feats.iloc[train_idx, -1], feats.iloc[val_idx, -1]
        w_train = weight_ls[train_idx]
        train_data = lgb.Dataset(X_train, label=y_train, weight=w_train)
        val_data = lgb.Dataset(X_val, label=y_val, reference=train_data)
        params = {
            "boosting_type": "gbdt",
            "objective": "regression",
            "metric": "None",  # rely solely on the custom feval
            "max_depth": 8,
            "num_leaves": 63,
            "min_data_in_leaf": 2,
            "learning_rate": 0.05,
            "feature_fraction": 0.9,
            "lambda_l1": 0.1,
            "lambda_l2": 0.2,
            "verbose": -1,
            "num_threads": 8,
        }
        adaptive_lr = adaptive_learning_rate(decay_rate=0.9, patience=1000)
        gbm = lgb.train(
            params,
            train_data,
            num_boost_round=25000,
            valid_sets=[val_data],
            feval=calculate_metrics,
            callbacks=[
                adaptive_lr,
                lgb.log_evaluation(period=200, show_stdv=True),
                lgb.early_stopping(stopping_rounds=2500, first_metric_only=True, verbose=True),
            ],
        )
        gbms.append(gbm)
    return gbms
```
Optimization directions:
Finer-grained cross-validation: Increase the number of folds or use nested cross-validation for more stable model evaluation.
Dynamic early stopping: Adapt the early-stopping strategy to how validation performance evolves, making training more flexible.
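Nested cross-validation separates hyperparameter selection (inner loop) from generalization estimation (outer loop). A pure-index sketch with illustrative fold counts, detached from any particular model:

```python
import numpy as np

def nested_cv_indices(n_samples, outer_k=5, inner_k=3, seed=42):
    """Yield (outer_train, outer_test, inner_splits): the outer loop estimates
    generalization error; the inner splits, drawn from outer_train only,
    are reserved for hyperparameter selection."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    outer_folds = np.array_split(idx, outer_k)
    for i in range(outer_k):
        outer_test = outer_folds[i]
        outer_train = np.concatenate([outer_folds[j] for j in range(outer_k) if j != i])
        # Re-split the outer training indices for the inner loop
        inner_folds = np.array_split(outer_train, inner_k)
        inner_splits = []
        for m in range(inner_k):
            inner_val = inner_folds[m]
            inner_train = np.concatenate([inner_folds[n] for n in range(inner_k) if n != m])
            inner_splits.append((inner_train, inner_val))
        yield outer_train, outer_test, inner_splits
```

In the inner loop one would run the hyperparameter search from section 3.1, then retrain on `outer_train` with the winning parameters and evaluate once on `outer_test`.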
3. Advanced Methods and Optimization Directions
3.1 Hyperparameter Tuning
Concept: Use grid search, random search, or Bayesian optimization to find the best hyperparameter combination and improve predictive performance.
Grid-search code:
```python
from sklearn.model_selection import GridSearchCV
from lightgbm import LGBMRegressor

param_grid = {
    'max_depth': [3, 5, 7, 9],
    'learning_rate': [0.01, 0.02, 0.05, 0.1],
    'n_estimators': [100, 500, 1000, 2000],
    'min_child_samples': [20, 30, 50, 100],
}
lgbm_model = LGBMRegressor(objective='regression', metric='rmse')
grid_search = GridSearchCV(estimator=lgbm_model, param_grid=param_grid, cv=5, n_jobs=-1, verbose=2)
grid_search.fit(X_train, y_train)
print("Best parameters found: ", grid_search.best_params_)
```
Random-search code:
```python
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint

param_dist = {
    'max_depth': sp_randint(3, 10),
    'learning_rate': [0.01, 0.02, 0.05, 0.1],
    'n_estimators': sp_randint(100, 2000),
    'min_child_samples': sp_randint(20, 100),
}
random_search = RandomizedSearchCV(estimator=lgbm_model, param_distributions=param_dist, n_iter=100, cv=5, n_jobs=-1, verbose=2)
random_search.fit(X_train, y_train)
print("Best parameters found: ", random_search.best_params_)
```
Bayesian-optimization code:
```python
import optuna

def objective(trial):
    params = {
        'max_depth': trial.suggest_int('max_depth', 3, 10),
        # suggest_float(..., log=True) replaces the deprecated suggest_loguniform
        'learning_rate': trial.suggest_float('learning_rate', 1e-3, 1e-1, log=True),
        'n_estimators': trial.suggest_int('n_estimators', 100, 2000),
        'min_child_samples': trial.suggest_int('min_child_samples', 20, 100),
    }
    model = LGBMRegressor(**params)
    model.fit(X_train, y_train)
    return model.score(X_val, y_val)

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=100)

print('Best trial:')
trial = study.best_trial
print('Value: {}'.format(trial.value))
print('Params: ')
for key, value in trial.params.items():
    print('    {}: {}'.format(key, value))
```
3.2 Ensemble Learning
Concept: Combine the strengths of multiple models through ensembling to improve overall predictive performance.
Stacking code:
```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import lightgbm as lgb
from xgboost import XGBRegressor
from sklearn.neural_network import MLPRegressor

X = df.drop('target', axis=1)  # feature columns
y = df['target']  # target column
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Base models
estimators = [
    ('lgb', lgb.LGBMRegressor(objective='regression')),
    ('xgb', XGBRegressor(objective='reg:squarederror')),
    ('mlp', MLPRegressor(hidden_layer_sizes=(50, 30), max_iter=500)),
]
# Final meta-regressor
final_estimator = LinearRegression()

# Build the stacking model
stacking_regressor = StackingRegressor(estimators=estimators, final_estimator=final_estimator, cv=5)
stacking_regressor.fit(X_train, y_train)
y_pred = stacking_regressor.predict(X_test)

# Evaluate the ensemble
mse = mean_squared_error(y_test, y_pred)
print(f'Test MSE: {mse:.4f}')

# Inspect each base model's individual performance
for name, est in stacking_regressor.named_estimators_.items():
    y_pred_individual = est.predict(X_test)
    mse_individual = mean_squared_error(y_test, y_pred_individual)
    print(f'{name} Test MSE: {mse_individual:.4f}')
```
3.3 Hybrid Learning
Concept: Combine deep learning with traditional machine-learning methods, using each one's strengths to improve model performance.
Deep-learning feature extraction combined with LightGBM:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import lightgbm as lgb
from sklearn.metrics import mean_squared_error

class RegressionCNN(nn.Module):
    def __init__(self, sequence_length):
        super(RegressionCNN, self).__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=3, stride=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.flatten = nn.Flatten()
        # Unpadded conv shortens the sequence by 2; pooling then halves it
        self.fc = nn.Linear(32 * ((sequence_length - 2) // 2), 100)
        self.regressor = nn.Linear(100, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.pool(x)
        x = self.flatten(x)
        x = self.fc(x)
        x = self.regressor(x)
        return x

sequence_length = 100  # assumed length of each sequence
# Assumes df['sequence'] already holds numerically encoded sequences (characters
# convertible to float); raw A/U/G/C strings would need an encoding step first
X = np.array([np.array(list(map(float, list(seq)))) for seq in df['sequence']])
X = X.reshape(X.shape[0], 1, sequence_length)
y = df['target'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
train_dataset = TensorDataset(torch.tensor(X_train, dtype=torch.float32), torch.tensor(y_train, dtype=torch.float32))
test_dataset = TensorDataset(torch.tensor(X_test, dtype=torch.float32), torch.tensor(y_test, dtype=torch.float32))
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

# RegressionCNN model with the Adam optimizer
model = RegressionCNN(sequence_length)
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

# Train the model
def train_model(model, train_loader):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output.view(-1), target)
        loss.backward()
        optimizer.step()

# Extract features (here, the CNN's scalar output; a richer variant could
# take the 100-dim activations before the final regressor layer)
def extract_features(model, loader):
    model.eval()
    features = []
    labels = []
    with torch.no_grad():
        for data, target in loader:
            output = model(data)
            features.extend(output.view(-1).numpy())
            labels.extend(target.numpy())
    return np.array(features), np.array(labels)

train_model(model, train_loader)
X_train_features, y_train = extract_features(model, train_loader)
X_test_features, y_test = extract_features(model, test_loader)

# Predict with LightGBM on the extracted features
lgb_regressor = lgb.LGBMRegressor(n_estimators=100, learning_rate=0.05, max_depth=5)
lgb_regressor.fit(X_train_features.reshape(-1, 1), y_train)
y_pred = lgb_regressor.predict(X_test_features.reshape(-1, 1))
mse = mean_squared_error(y_test, y_pred)
print(f"Test MSE: {mse:.4f}")
```
Summary
Although time was limited and only part of the content was implemented, I hope these ideas and methods can inspire readers' own projects and further improve the accuracy of siRNA silencing-efficiency prediction.