General loss functions
Building on our interpretation of supervised learning as (1) choosing a representation for the problem, (2) choosing a loss function, and (3) minimizing the loss, let us consider a slightly more general formulation of supervised learning. In the supervised learning settings we have considered thus far, we have input data x ∈ Rn and targets y from a space Y. In linear regression this corresponded to y ∈ R, that is, Y = R; for logistic regression and other binary classification problems we had y ∈ Y = {−1, 1}; and for multiclass classification we had y ∈ Y = {1, 2, . . . , k} for some number k of classes.
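As a concrete illustration of how the loss function changes with the target space Y, here is a small sketch (my own example, not from the notes) of a per-example loss for each of the three settings mentioned above, written as a function of a score z (e.g. z = wᵀx) and the target y:

```python
import numpy as np

def squared_loss(z, y):
    # Regression, y in R: L(z, y) = (z - y)^2
    return (z - y) ** 2

def logistic_loss(z, y):
    # Binary classification, y in {-1, +1}: L(z, y) = log(1 + exp(-y z))
    return np.log1p(np.exp(-y * z))

def multiclass_logistic_loss(scores, y):
    # Multiclass classification, y in {0, ..., k-1}, scores is a length-k
    # vector: the softmax (multinomial logistic) cross-entropy loss.
    scores = scores - scores.max()  # subtract max for numerical stability
    return np.log(np.exp(scores).sum()) - scores[y]
```

Each of these losses is convex in z (or in the score vector), which is what makes the "minimize the loss" step tractable in the settings above.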
2018-08-11
Hidden Markov Models Fundamentals
Abstract
How can we apply machine learning to data that is represented as a
sequence of observations over time? For instance, we might be interested in discovering the sequence of words that someone spoke based on an audio recording of their speech. Or we might be interested in annotating a sequence
2018-08-11
Gaussian processes
Gaussian processes: introductory lecture notes
1. solve a convex optimization problem in order to identify the single “best fit” model for the data, and
2. use this estimated model to make “best guess” predictions for future test input points
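The two-step parametric recipe above can be sketched with ordinary least squares, where the convex problem even has a closed-form solution (an assumed illustration, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # training inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)   # noisy training targets

# Step 1: solve a convex optimization problem for the single "best fit"
# model -- for least squares, np.linalg.lstsq returns the minimizer of
# ||X w - y||^2 directly.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: use the estimated model to make "best guess" point predictions
# at future test inputs.
X_test = rng.normal(size=(5, 3))
y_pred = X_test @ w_star
```

Gaussian processes depart from this recipe: instead of committing to a single best-fit parameter vector, they maintain a distribution over functions and produce predictive distributions rather than point guesses.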
2018-08-11