MachineLearning
GuokLiu
Enjoy the present moment; focus on doing one thing well
220315 - Computing cross-entropy loss in PyTorch when the target is a float
By default, PyTorch's cross-entropy function is called as loss(pred, target), with floating-point predictions and integer class-index targets:

```python
# Example of target with class indices
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()
```

Original · 2022-03-15 14:03:55 · 2291 views · 0 comments
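The post's subject — float (probability) targets — is supported by nn.CrossEntropyLoss since PyTorch 1.10, where each target row is a distribution over the classes. As a framework-free sketch, the same quantity can be computed in NumPy (the function name soft_cross_entropy is mine, not PyTorch's):

```python
import numpy as np

def soft_cross_entropy(logits, target_probs):
    """Cross-entropy with probability (float) targets, matching what
    nn.CrossEntropyLoss computes for soft targets in PyTorch >= 1.10."""
    # numerically stable log-softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # per-sample loss: -sum_c p_c * log q_c, then mean over the batch
    return float(-(target_probs * log_softmax).sum(axis=1).mean())

rng = np.random.default_rng(0)
logits = rng.standard_normal((3, 5))
# float targets: each row is a probability distribution over 5 classes
t = rng.random((3, 5))
target = t / t.sum(axis=1, keepdims=True)
print(soft_cross_entropy(logits, target))
```

With a one-hot target this reduces exactly to the usual class-index cross-entropy.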
220114 - Visualizing the confidence of model predictions
```python
import pandas as pd
df = pd.DataFrame(Pre.numpy())
df['Cfd'] = np.max(Pre.numpy(), axis=1)
df['Cfd'] = (df['Cfd'] - df['Cfd'].min()) / (df['Cfd'].max() - df['Cfd'].min())
df['Pre'] = np.argmax(Pre.numpy(), axis=1)
df['Tar'] = target_Y
df['Tag'] = [1 if p el...
```

Original · 2022-01-14 11:39:27 · 865 views · 0 comments
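The preview cuts off mid-line. A self-contained sketch of what the snippet appears to build — assuming Pre holds softmax scores and target_Y the true labels (the synthetic scores array below is a stand-in for Pre.numpy()), and guessing that the truncated Tag line flags correct predictions — might look like:

```python
import numpy as np
import pandas as pd

# hypothetical stand-in for Pre.numpy(): 5 samples x 3 classes of softmax scores
rng = np.random.default_rng(1)
scores = rng.random((5, 3))
scores = scores / scores.sum(axis=1, keepdims=True)
target_Y = np.array([0, 2, 1, 0, 2])

df = pd.DataFrame(scores)
df['Cfd'] = scores.max(axis=1)                      # raw confidence = top score
df['Cfd'] = (df['Cfd'] - df['Cfd'].min()) / (df['Cfd'].max() - df['Cfd'].min())
df['Pre'] = scores.argmax(axis=1)                   # predicted class
df['Tar'] = target_Y
# guessed completion of the truncated line: 1 where the prediction is correct
df['Tag'] = [1 if p == t else 0 for p, t in zip(df['Pre'], df['Tar'])]
print(df[['Cfd', 'Pre', 'Tar', 'Tag']])
```

The min-max step rescales confidences to [0, 1] so they can be plotted on a common axis.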
210817 - IRLS (Iteratively Reweighted Least Squares)
Code 1:

```python
from numpy import array, diag, dot, maximum, empty, repeat, ones, sum
from numpy.linalg import inv

def IRLS(y, X, maxiter, w_init=1, d=0.0001, tolerance=0.001):
    n, p = X.shape
    delta = array(repeat(d, n)).reshape(1, n)
    w = repeat(1, n)...
```

Original · 2021-08-17 17:35:26 · 3384 views · 3 comments
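The preview truncates the loop body. A minimal complete IRLS for robust (L1) regression, under the usual reweighting w_i = 1/max(|r_i|, d) — an assumption about what the full code does, not a copy of it — could be:

```python
import numpy as np

def irls(y, X, maxiter=50, d=1e-4, tolerance=1e-3):
    """Iteratively reweighted least squares for robust (L1) regression.
    Weights w_i = 1/max(|r_i|, d) downweight points with large residuals."""
    n, p = X.shape
    w = np.ones(n)
    B = np.zeros(p)
    for _ in range(maxiter):
        W = np.diag(w)
        # weighted normal equations: B = (X'WX)^{-1} X'Wy
        B_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        if np.sum(np.abs(B_new - B)) < tolerance:
            B = B_new
            break
        B = B_new
        w = 1.0 / np.maximum(d, np.abs(y - X @ B))
    return B

# toy line y = 2 + 3x with one gross outlier; IRLS should still recover the line
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = 2 + 3 * np.arange(6.0)
y[5] += 50          # outlier
print(irls(y, X))
```

Ordinary least squares would be pulled heavily toward the outlier; the reweighting suppresses it.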
180405 - Parameter count of a single-layer convolutional network
NetEase Cloud Classroom resources: summary of convolutional networks · convolutional network examples · a CNN example (model parameters can be chosen by referring to settings others used on similar problems) · rules of thumb for CNN parameters · summary of pooling · for the same input and output, a convolutional network needs far fewer parameters than a fully connected one · three reasons convolutional networks work well (images from the web; contact for removal if infringing)

Original · 2018-04-05 19:48:48 · 335 views · 0 comments
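The parameter-saving claim is easy to verify by arithmetic: a conv layer with n_c filters of size f×f over n_c_prev input channels has (f·f·n_c_prev + 1)·n_c parameters, bias included — independent of the spatial input size. A quick sketch (the fully connected comparison is my own illustrative choice of shapes):

```python
# parameter count of a single conv layer: (f*f*n_c_prev + 1) * n_c
def conv_params(f, n_c_prev, n_c):
    # each of the n_c filters has f*f*n_c_prev weights plus one bias
    return (f * f * n_c_prev + 1) * n_c

# e.g. 10 filters of size 3x3 over a 3-channel input:
print(conv_params(3, 3, 10))        # 280 parameters

# a fully connected layer mapping 32*32*3 inputs to 28*28*10 outputs
# would need ~24 million weights for the same shape transformation:
print(32 * 32 * 3 * 28 * 28 * 10)   # 24084480
```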
180415 - A collection of machine learning and deep learning course resources (continuously updated)
[2016 - Machine Learning] Coursera: https://www.coursera.org/learn/machine-learning · NetEase Cloud: http://study.163.com/course/introduction/1004570029.htm
[2017 - Deep Learning for Computer Vision] NetEase Cloud: study.163.com/course/introduction/100322300...

Original · 2018-04-15 17:18:08 · 373 views · 0 comments
180311 - Plotting four common activation functions in Python
```python
# -*- coding: utf-8 -*-
"""
Created on Sun Mar 11 20:41:57 2018
@author: brucelau
"""
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-10, 10)
y_sigmoid = 1/(1+np.exp(-x))
y_tan...
```

Original · 2018-03-11 20:59:20 · 11179 views · 4 comments
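The snippet is cut off after the tanh line. The four activations presumably meant (sigmoid, tanh, ReLU, and — a guess at the fourth — leaky ReLU) can be computed as below; each y array can then be passed to plt.plot(x, y):

```python
import numpy as np

x = np.linspace(-10, 10, 201)

y_sigmoid = 1 / (1 + np.exp(-x))        # squashes to (0, 1)
y_tanh = np.tanh(x)                     # squashes to (-1, 1), zero-centered
y_relu = np.maximum(0, x)               # zero for x < 0, identity otherwise
y_leaky_relu = np.where(x > 0, x, 0.01 * x)  # small slope keeps gradient alive

print(y_sigmoid[100], y_relu.min(), y_leaky_relu[0])
```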
180310 - Training with vs. without Batch Normalization under different signal-to-noise ratios
[Reference] Batch Normalization: how to train deep neural networks faster. In the experiment, reuse=2 uses Batch Normalization and reuse=3 does not; the plots show that BN speeds up network training.

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 20 ...
```

Original · 2018-03-10 20:28:54 · 1089 views · 0 comments
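Why BN helps can be seen from its forward pass: each feature is standardized over the batch before a learned scale and shift. A NumPy sketch of the training-time computation (independent of the post's original TensorFlow code):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis (training-time forward pass)."""
    mu = x.mean(axis=0)                      # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance
    return gamma * x_hat + beta              # learned rescale and reshift

rng = np.random.default_rng(0)
x = 5 + 3 * rng.standard_normal((64, 10))    # shifted, scaled activations
out = batch_norm(x)
print(out.mean(), out.std())                 # ~0 and ~1
```

Keeping layer inputs standardized like this reduces the shift in their distribution between updates, which is what lets training proceed with larger learning rates.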
180128 - A geometric interpretation of SVD (notes to be organized)
- What is an intuitive explanation of singular value decomposition (SVD)?
- Singular Value Decomposition | Stanford University
- A geometrical interpretation of the SVD
- How to Read a Research Paper

Original · 2018-01-28 15:45:08 · 369 views · 0 comments
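The geometric reading of the SVD — any linear map is a rotation/reflection, then an axis-aligned scaling, then another rotation/reflection — can be checked numerically on a small matrix:

```python
import numpy as np

# Geometrically, A = U @ diag(S) @ Vt: rotate (Vt), scale along axes (S), rotate (U)
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
U, S, Vt = np.linalg.svd(A)

# the singular values are the semi-axis lengths of the ellipse
# that A maps the unit circle onto
print(S)
# U and Vt are orthogonal (pure rotations/reflections), and the product rebuilds A
print(np.allclose(U @ np.diag(S) @ Vt, A))
```

For this symmetric matrix the singular values equal the eigenvalues, 4 and 2.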
180126 - A Python implementation of the paper "Probabilistic Principal Component Analysis"
Paper: Probabilistic Principal Component Analysis. Matlab and Jupyter code: Github-Matlab, Github-Jupyter Notebook.

Code 1 - the EM algorithm for PCA:

```python
# -*- coding: utf-8 -*-
"""
Created on Thu Jan 25 22:06:32 2018
@author: brucel...
```

Original · 2018-01-26 15:05:44 · 1333 views · 2 comments
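For reference, the closed-form EM updates for PPCA from Tipping & Bishop (1999) can be sketched in a few lines of NumPy — the toy data and initialization below are mine, not taken from the linked repositories:

```python
import numpy as np

def ppca_em(X, q=2, n_iter=100):
    """EM for probabilistic PCA (Tipping & Bishop, 1999).
    X: (N, d) data; returns the loading matrix W (d, q) and noise variance sigma2."""
    N, d = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / N                       # sample covariance
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, q))
    sigma2 = 1.0
    for _ in range(n_iter):
        M = W.T @ W + sigma2 * np.eye(q)    # (q, q) posterior precision factor
        Minv = np.linalg.inv(M)
        SW = S @ W
        # closed-form EM updates from the paper
        W_new = SW @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ SW)
        sigma2 = np.trace(S - SW @ Minv @ W_new.T) / d
        W = W_new
    return W, sigma2

# toy data: 2-D latent structure embedded in 5-D with small isotropic noise
rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 2))
A = rng.standard_normal((2, 5))
X = Z @ A + 0.1 * rng.standard_normal((500, 5))
W, sigma2 = ppca_em(X, q=2)
print(W.shape, sigma2)
```

The recovered sigma2 should approach the true noise variance (0.01 here), and the column span of W the true 2-D subspace.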
180114 - Notes on covariance_type in sklearn.GaussianMixture
A geometric interpretation of the covariance matrix. From the sklearn.GaussianMixture documentation:

covariance_type : {'full', 'tied', 'diag', 'spherical'} — 'full' (each component has its own general covariance matrix), 'tied' (all components share the sam...

Original · 2018-01-14 10:47:02 · 2378 views · 0 comments
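The four covariance_type options trade flexibility against parameter count. For d features and K components, the free parameters in the covariance part can be counted directly (the helper below is illustrative, not a sklearn API):

```python
# free covariance parameters per mixture for d features and K components
def cov_params(cov_type, d, K):
    full = d * (d + 1) // 2                 # entries of one symmetric d x d matrix
    return {
        'full': K * full,       # one general covariance per component
        'tied': full,           # a single covariance shared by all components
        'diag': K * d,          # per-component, per-feature variances only
        'spherical': K,         # a single variance per component
    }[cov_type]

for t in ('full', 'tied', 'diag', 'spherical'):
    print(t, cov_params(t, d=3, K=4))
```

With d=3 and K=4 this gives 24, 6, 12, and 4 parameters respectively, which is why 'diag' or 'spherical' are the usual choices when data is scarce.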
170904 - Training Deep Neural Networks on Noisy Labels with Bootstrapping - Notes (TBC)
References:
- What is the difference between the detection, recognition and identification of things?
- A Friendly Introduction to Cross-Entropy Loss
- The cross-entropy error function in neural networks

Original · 2017-09-04 20:42:01 · 1450 views · 1 comment
170627 - GAN for Beginners
- We-Chat Link
- Github Code
- BaiduCloud Code
- Oreilly Link
- Tips and Tricks to Make GANs Work

Original · 2017-06-27 20:29:27 · 353 views · 0 comments
Python Machine Learning: Support Vector Machines 01 - sklearn_note_26.1_26.2
Ref: Scikit Learn

26.1 Support Vector Regression (SVR) using linear and non-linear kernels

```python
"""
===================================================================
Support Vector Regression (SVR) using lin...
```

Translated · 2017-02-17 18:21:48 · 1157 views · 0 comments
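A minimal runnable version of that scikit-learn gallery example — fitting SVR with RBF and linear kernels to a sine curve (the data generation here is my own stand-in for the gallery's):

```python
import numpy as np
from sklearn.svm import SVR

# toy 1-D regression target: a sine over [0, 5]
rng = np.random.default_rng(0)
X = np.sort(5 * rng.random((40, 1)), axis=0)
y = np.sin(X).ravel()

# same hyperparameters as the scikit-learn example
svr_rbf = SVR(kernel='rbf', C=100, gamma=0.1, epsilon=0.1)
svr_lin = SVR(kernel='linear', C=100)

pred_rbf = svr_rbf.fit(X, y).predict(X)
pred_lin = svr_lin.fit(X, y).predict(X)

# the RBF kernel tracks the nonlinear curve; the linear kernel cannot
print(np.abs(pred_rbf - y).mean(), np.abs(pred_lin - y).mean())
```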