The model.fit() function:
https://blog.csdn.net/a1111h/article/details/82148497
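A minimal sketch of a typical fit() call, assuming the link refers to Keras's Model.fit; the random data and layer sizes below are placeholders used purely for illustration.
# Minimal sketch, assuming Keras's Model.fit; random data is for illustration only
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
x = np.random.rand(100, 8)                      # 100 samples, 8 features
y = np.random.randint(0, 2, size=(100, 1))      # binary labels
model = Sequential([Dense(16, activation='relu', input_shape=(8,)),
                    Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x, y,
                    epochs=10,            # number of passes over the training data
                    batch_size=32,        # samples per gradient update
                    validation_split=0.2, # hold out 20% of the data for validation
                    verbose=1)            # fit() returns a History object with per-epoch metrics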
Data standardization with fit(), transform(), and fit_transform():
https://blog.csdn.net/Michelle_sky/article/details/79644645?utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7Edefault-7.control&dist_request_id=1330147.8275.16180431929810889&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7Edefault-7.control
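A short sketch of the difference between the three, using sklearn's StandardScaler with made-up arrays: fit() only learns the statistics, transform() applies them, and fit_transform() does both in one step.
# fit() learns the mean/std from the training data only; transform() applies them
import numpy as np
from sklearn.preprocessing import StandardScaler
X_train = np.random.rand(80, 3)
X_test = np.random.rand(20, 3)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on the training set, then transform it
X_test_scaled = scaler.transform(X_test)        # reuse the training-set statistics; do NOT fit again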
sklearn.linear_model.LinearRegression() for linear regression:
https://lcqbit11.blog.csdn.net/article/details/70196159
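A minimal LinearRegression sketch with made-up data:
# Fit a straight line to a few points and inspect the learned parameters
import numpy as np
from sklearn.linear_model import LinearRegression
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 4.0, 6.2, 7.9])              # roughly y = 2x
reg = LinearRegression().fit(X, y)
print(reg.coef_, reg.intercept_)                # fitted slope and intercept
print(reg.predict([[5.0]]))                     # prediction for a new point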
sklearn.svm for support vector machines:
https://blog.csdn.net/qq_41577045/article/details/79859902
https://www.jianshu.com/p/a9f9954355b3?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation
https://blog.csdn.net/zcc_0015/article/details/52151654
https://zhuanlan.zhihu.com/p/39780508
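A minimal sklearn.svm.SVC sketch on the built-in iris dataset; the hyperparameter values are arbitrary examples.
# Train an RBF-kernel SVM classifier and evaluate it on a held-out test set
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel='rbf', C=1.0, gamma='scale')   # common hyperparameters: kernel, C, gamma
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))                # mean accuracy on the test set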
RandomForestClassifier for random forests:
https://blog.csdn.net/w952470866/article/details/78987265/
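A minimal RandomForestClassifier sketch, again on the iris dataset with arbitrary hyperparameters.
# Train a random forest and evaluate it on a held-out test set
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
rf.fit(X_train, y_train)
print(rf.score(X_test, y_test))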
Feature importances:
https://blog.csdn.net/sunmingyang1987/article/details/103100654
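A sketch of reading feature_importances_ from a fitted random forest; the iris data is used only as a stand-in.
# Rank features by importance; the importances of all features sum to 1
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
data = load_iris()
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)
for name, importance in sorted(zip(data.feature_names, rf.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")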
One-hot encoding of labels:
The keras.utils.to_categorical method
https://blog.csdn.net/qian2213762498/article/details/86584335
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils

le = LabelEncoder()                                 # use LabelEncoder to integer-encode the categorical labels
le.fit(sdss_df['class'])                            # fit() is given the data to encode and internally builds the key-value mapping, converting each discrete class to an integer in 0..n-1
encoded_Y = le.transform(sdss_df['class'])          # transform() applies the mapping to the data to be converted
onehot_labels = np_utils.to_categorical(encoded_Y)  # keras.utils.to_categorical converts the integer class labels to one-hot vectors
y_train = onehot_labels[:train_count]
y_validation = onehot_labels[train_count:train_count+val_count]
y_test = onehot_labels[-test_count:]
Building neural networks with the Keras Sequential model:
Single input and single output, one straight path from start to finish; layers only connect to their immediate neighbors, with no cross-layer connections. Such models compile quickly and are simple to work with.
https://blog.csdn.net/rosefun96/article/details/110005568
https://blog.csdn.net/zjw642337320/article/details/81204560
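A minimal Sequential sketch; layers (here Dense, see the next entry) are simply stacked in order, and the sizes are arbitrary.
# Build a small fully-connected network by stacking layers in order
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(10,)))  # the first layer must declare the input shape (batch size excluded)
model.add(Dense(32, activation='relu'))                     # later layers infer their input size from the previous layer
model.add(Dense(3, activation='softmax'))                   # output layer, e.g. for 3 classes
model.summary()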
Building Keras network layers with keras.layers.Dense:
https://blog.csdn.net/weixin_42499236/article/details/84624195
Keras neural network evaluation metrics, categorical_accuracy vs. sparse_categorical_accuracy:
https://blog.csdn.net/qq_20011607/article/details/89213908
Adjusting the learning rate in Keras with ReduceLROnPlateau():
https://blog.csdn.net/weixin_43593330/article/details/107675538
https://www.freesion.com/article/4104548785/
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=1)
monitor: the quantity to monitor; can be accuracy, val_loss, val_accuracy, etc.
factor: factor by which the learning rate is scaled; the rate is reduced as lr = lr * factor
patience: number of epochs with no improvement in the monitored quantity after which the learning-rate reduction is triggered
mode: one of 'auto', 'min', 'max'; the default 'auto' is usually fine
epsilon: threshold for deciding whether the monitored value has entered a "plateau" (called min_delta in newer Keras versions)
cooldown: number of epochs to wait after a learning-rate reduction before resuming normal operation
min_lr: lower bound on the learning rate; it will not be reduced below this value
verbose: verbosity mode, 0 or 1.
Reduce=ReduceLROnPlateau(monitor='val_accuracy',
factor=0.1,
patience=2,
verbose=1,
mode='auto',
epsilon=0.0001,
cooldown=0,
min_lr=0)
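The callback only takes effect when passed to model.fit; a sketch assuming `model` is an already-compiled Keras model and that the training/validation arrays (placeholder names) already exist.
# Sketch: the callback is activated by passing it to fit() via callbacks=[...]
# Assumes `model` is compiled and x_train/y_train, x_validation/y_validation are placeholder arrays
history = model.fit(x_train, y_train,
                    validation_data=(x_validation, y_validation),  # needed so val_accuracy is available to monitor
                    epochs=50,
                    batch_size=64,
                    callbacks=[Reduce])          # the ReduceLROnPlateau instance configured above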
The keras.optimizers.Adam() optimizer:
https://blog.csdn.net/u013249853/article/details/105875694/
https://zhuanlan.zhihu.com/p/86261902?from_voters_page=true
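A sketch of constructing the optimizer explicitly instead of passing the string 'adam'; the hyperparameter values shown are the commonly used defaults.
# Sketch: explicit Adam optimizer with its main hyperparameters
from keras.optimizers import Adam
adam = Adam(learning_rate=0.001,   # step size (older standalone Keras versions call this argument lr)
            beta_1=0.9,            # decay rate for the first-moment (mean) estimate
            beta_2=0.999,          # decay rate for the second-moment (variance) estimate
            epsilon=1e-7)          # small constant for numerical stability
The resulting object can then be passed as optimizer=adam in model.compile().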
Keras model.compile():
https://blog.csdn.net/sinat_16388393/article/details/93207842
Keras: model.compile(loss='<objective function>', optimizer='adam', metrics=['accuracy'])
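A concrete version of that call, assuming `model` is an already-built Keras model (placeholder); which loss and metric to use depends on how the labels are encoded.
# Sketch: concrete compile() call; choices depend on the label format
model.compile(loss='categorical_crossentropy',      # for one-hot labels (use 'sparse_categorical_crossentropy' for integer labels)
              optimizer='adam',
              metrics=['categorical_accuracy'])     # likewise, 'sparse_categorical_accuracy' pairs with integer labels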
Splitting datasets in PyTorch with torch.utils.data:
https://www.cnblogs.com/Bella2017/p/11791216.html
PyTorch numeric matrix (tensor) operations:
https://www.jianshu.com/p/d678c5e44a6b
PyTorch GPU/CPU configuration:
http://www.zzvips.com/article/62791.html
The PyTorch optimizer torch.optim.SGD:
https://blog.csdn.net/qq_34690929/article/details/79932416
https://www.cnblogs.com/peixu/p/13194328.html
Testing a model in PyTorch with model.eval():
https://blog.csdn.net/iammelon/article/details/89928531
https://www.cnblogs.com/luckyplj/p/13424561.html
(A combined sketch covering these PyTorch topics appears at the end of these notes.)
Reshaping matrices in Keras with Reshape:
https://zhuanlan.zhihu.com/p/83993002
https://blog.csdn.net/moshiyaofei/article/details/87888451
Use keras.layers.Reshape to connect arbitrary layers of different dimensions.
from keras.models import Sequential
from keras.layers import Reshape
model = Sequential()
# Reshape the data to 3 rows x 4 columns
# The first layer of the model must specify the input shape; note that the batch size is not included
model.add(Reshape((3, 4), input_shape=(12, )))
# Reshape the data to 6 rows x 2 columns
model.add(Reshape((6, 2)))
# Reshape so that dimensions 2 and 3 are (2, 2); dimension 1 is inferred from the number of elements (here 3)
model.add(Reshape((-1, 2, 2)))
# Reshape so that dimensions 1 and 2 are (2, 2); dimension 3 is inferred from the number of elements (here 3)
model.add(Reshape((2, 2, -1)))
model.summary()
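A combined sketch for the PyTorch links above (dataset splitting with torch.utils.data.random_split, GPU/CPU device configuration, torch.optim.SGD, and model.eval()); the tiny network and random data are invented purely for illustration.
# Combined PyTorch sketch: random_split, device selection, SGD, train/eval modes
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split

X = torch.randn(100, 8)                      # random features, for illustration only
y = torch.randint(0, 3, (100,))              # random integer labels for 3 classes
dataset = TensorDataset(X, y)

# torch.utils.data.random_split divides the dataset into train/validation subsets
train_set, val_set = random_split(dataset, [80, 20])
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)

# GPU/CPU configuration: move the model (and each batch) to the chosen device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3)).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()                            # training mode (enables dropout, batch-norm updates)
    for xb, yb in train_loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

    model.eval()                             # evaluation mode; combine with torch.no_grad() when testing
    correct = 0
    with torch.no_grad():
        for xb, yb in val_loader:
            xb, yb = xb.to(device), yb.to(device)
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
    print(f"epoch {epoch}: val accuracy = {correct / len(val_set):.2f}")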