
General loss functions

Building off of our interpretations of supervised learning as (1) choosing a representation for our problem, (2) choosing a loss function, and (3) minimizing the loss, let us consider a slightly more general formulation for supervised learning. In the supervised learning settings we have considered thus far, we have input data x ∈ R^n and targets y from a space Y. In linear regression this corresponded to y ∈ R, that is, Y = R; for logistic regression and other binary classification problems we had y ∈ Y = {−1, 1}; and for multiclass classification we had y ∈ Y = {1, 2, . . . , k} for some number k of classes.

2018-08-11
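The excerpt above frames supervised learning as fixing a target space Y, choosing a loss, and minimizing that loss over the training data. As a minimal illustrative sketch (not code from the notes; all function names are my own), here are losses commonly paired with the three target spaces mentioned, plus the averaged empirical risk that step (3) minimizes:

import numpy as np

def squared_loss(y_hat, y):
    # Regression: Y = R
    return 0.5 * (y_hat - y) ** 2

def logistic_loss(score, y):
    # Binary classification: Y = {-1, +1}; score plays the role of theta^T x
    return np.log1p(np.exp(-y * score))

def multiclass_log_loss(scores, y):
    # Multiclass: Y = {1, ..., k}; scores is a length-k vector of class scores,
    # with y stored 0-indexed here purely as a coding convenience.
    return np.logaddexp.reduce(scores) - scores[y]

def empirical_risk(loss, predictions, targets):
    # Step (3): average the chosen loss over the training set and minimize it.
    return np.mean([loss(p, t) for p, t in zip(predictions, targets)])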

Hidden Markov Models Fundamentals

Abstract: How can we apply machine learning to data that is represented as a sequence of observations over time? For instance, we might be interested in discovering the sequence of words that someone spoke based on an audio recording of their speech. Or we might be interested in annotating a sequence…

2018-08-11
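A standard inference task for the hidden Markov models the abstract describes is recovering the most likely hidden-state sequence (for example, word or phoneme labels) from the observed sequence. Below is a minimal sketch of Viterbi decoding in NumPy; the parameter names pi, A and B are assumptions for illustration, not notation taken from the paper:

import numpy as np

def viterbi(obs, pi, A, B):
    # obs: observation indices, length T
    # pi : initial state distribution, shape (S,)
    # A  : transitions, A[i, j] = P(z_t = j | z_{t-1} = i), shape (S, S)
    # B  : emissions, B[i, o] = P(x_t = o | z_t = i), shape (S, num_obs)
    S, T = len(pi), len(obs)
    logp = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob of a path ending in each state
    back = np.zeros((T, S), dtype=int)         # back-pointers for reconstructing the path
    for t in range(1, T):
        cand = logp[:, None] + np.log(A)       # extend every path by one transition
        back[t] = np.argmax(cand, axis=0)      # best predecessor for each current state
        logp = cand[back[t], np.arange(S)] + np.log(B[:, obs[t]])
    # Trace the best path backwards from the best final state.
    path = [int(np.argmax(logp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]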

Gaussian processes

An introduction to Gaussian processes (lecture notes): 1. solve a convex optimization problem in order to identify the single “best fit” model for the data, and 2. use this estimated model to make “best guess” predictions for future test input points

2018-08-11
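The two numbered steps in the excerpt describe the classical parametric recipe: fit one “best” model by convex optimization, then predict with that single point estimate. A minimal sketch, using ordinary least squares purely as an illustration (not code from the lecture notes):

import numpy as np

def fit_least_squares(X, y):
    # Step 1: solve a convex problem for the single "best fit" parameter vector.
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def predict(X_test, theta):
    # Step 2: plug the point estimate into the model for "best guess" predictions.
    return X_test @ theta

Gaussian processes depart from this recipe by maintaining a distribution over functions rather than committing to a single point estimate.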
