- Reminder of key notions. A high-dimensional parameter space means $p \gg n$. Sparsity means the parameter vector has many zero elements, i.e., the parameter lies in a lower-dimensional subspace. When the subspace dimension is smaller than $n$, the parameter is estimable.
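A minimal sketch of estimability under sparsity when $p \gg n$. The design, signal strength, and the choice of scikit-learn's `Lasso` (with an illustrative `alpha`) are my own assumptions, not from the notes; the point is only that a $k$-sparse parameter can be recovered from $n < p$ observations:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 50, 200, 5                 # n observations, p >> n parameters, k true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 5.0                       # sparse truth: only k nonzero coefficients
y = X @ beta + 0.5 * rng.standard_normal(n)

fit = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(np.abs(fit.coef_) > 0.5)
print(support)                       # estimated support of the sparse parameter
```

With a strong, well-separated signal the estimated support contains the true nonzero coordinates even though the full $p$-dimensional parameter is not identifiable from $n$ points.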
- In a sparse high-dimensional (SHD) problem, the locations of the zeros are unknown; otherwise the parameter could be directly cast into the lower-dimensional subspace.
- A penalized log-likelihood can be interpreted as a log posterior density: the ridge penalty corresponds to a normal prior, the lasso penalty to a Laplace prior. The key insight of this method is to use a zero-inflated prior to shrink noise and a fat-tailed prior to preserve signal.
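The penalty/prior correspondence can be checked numerically: up to an additive constant, the negative log density of a normal prior is the ridge penalty $\beta^2/(2\tau^2)$ and that of a Laplace prior is the lasso penalty $|\beta|/b$. The prior scales `tau` and `b` below are illustrative:

```python
import numpy as np
from scipy import stats

tau, b = 2.0, 1.5                      # illustrative prior scales
beta = np.array([-1.0, 0.3, 2.5])

# Ridge: -log N(beta | 0, tau^2) = beta^2 / (2 tau^2) + const
nll_normal = -stats.norm.logpdf(beta, scale=tau)
ridge_pen = beta**2 / (2 * tau**2)
print(np.allclose(nll_normal - nll_normal[0], ridge_pen - ridge_pen[0]))   # True

# Lasso: -log Laplace(beta | 0, b) = |beta| / b + const
nll_laplace = -stats.laplace.logpdf(beta, scale=b)
lasso_pen = np.abs(beta) / b
print(np.allclose(nll_laplace - nll_laplace[0], lasso_pen - lasso_pen[0]))  # True
```

Differencing against the first element removes the normalizing constant, so only the penalty shape is compared.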
- Scale mixture of normals: $X = Y\sigma$, where $Y$ is a standard normal and $\sigma$ is a continuous random variable on $(0, \infty)$. [West 1987 Biometrika paper]. Many well-known distributions belong to this family: the t, logistic, and Laplace distributions, and the instantaneous distribution generated by stochastic-volatility Brownian motion in finance.
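One concrete member of the family: if $\sigma^2$ is exponential with mean $2b^2$, then $X = Y\sigma$ is Laplace with scale $b$ (a standard result; the simulation below is my own illustration, checking the moments rather than proving the identity):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
b, N = 1.0, 200_000
sigma2 = rng.exponential(scale=2 * b**2, size=N)  # mixing variance with mean 2 b^2
x = rng.standard_normal(N) * np.sqrt(sigma2)      # X = Y * sigma

# Laplace(0, b) has variance 2 b^2 and excess kurtosis 3
print(x.var())             # close to 2 b^2 = 2.0
print(stats.kurtosis(x))   # close to 3 (fat tails, unlike the normal's 0)
```

The excess kurtosis of 3 is exactly the fat-tail property that lets a Laplace-type prior preserve large signals while still shrinking small ones.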
note: weak, sparse, high-dimensional signal