Suppose I have a set of random X, Y points:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 50)
y = np.random.uniform(low=0.0, high=40.0, size=50)  # one noise draw per x
y = x + y
plt.scatter(x, y)
Say I use linear regression to model y as a Gaussian for each value of x. How can I estimate the posterior predictive, i.e. p(y | x), for each (possible) value of x?
Is there a direct way to do this with pymc or scikit-learn?
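(An aside on the scikit-learn half of the question: in recent scikit-learn versions, BayesianRidge fits a Bayesian linear regression and its predict method can return the standard deviation of the Gaussian predictive distribution alongside the mean. A minimal sketch on the data above:)
from sklearn.linear_model import BayesianRidge

model = BayesianRidge()
model.fit(x.reshape(-1, 1), y)   # sklearn expects a 2-D design matrix

# Predictive mean and standard deviation of p(y | x) at each x
y_mean, y_std = model.predict(x.reshape(-1, 1), return_std=True)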
Best answer: If I understand what you want, you can do this with the git version of PyMC (PyMC3) and its glm submodule.
For example:
import numpy as np
import pymc as pm   # git version of PyMC (PyMC3)
import matplotlib.pyplot as plt
from pymc import glm

## Make some data
x = np.arange(0, 50)
y = np.random.uniform(low=0.0, high=40.0, size=50)
y = 2 * x + y
## plt.scatter(x, y)

data = dict(x=x, y=y)

with pm.Model() as model:
    # Specify the GLM and pass in the data. The resulting linear model,
    # its likelihood and all its parameters are automatically added to our model.
    pm.glm.glm('y ~ x', data)
    step = pm.NUTS()              # instantiate the MCMC sampling algorithm
    trace = pm.sample(2000, step)

## fig = pm.traceplot(trace, lines={'alpha': 1, 'beta': 2, 'sigma': .5})  ## traces
fig = plt.figure()
ax = fig.add_subplot(111)
plt.scatter(x, y, label='data')
glm.plot_posterior_predictive(trace, samples=50, eval=x,
                              label='posterior predictive regression lines')
which should give you something like this:
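If you want the raw posterior draws rather than the plot, the trace can be indexed by variable name. A small sketch; the parameter names Intercept, x, and sd are what the glm submodule uses by default, so check trace.varnames in your version:
print(trace.varnames)              # e.g. ['Intercept', 'x', 'sd']
print(trace['Intercept'].mean())   # posterior mean of the intercept
print(trace['x'].mean())           # posterior mean of the slope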
You should find these blog posts interesting: 1 and 2, which is where I took these ideas from.
EDIT
To get the y values for each x, try this, which I got from digging into the glm source:
lm = lambda x, sample: sample['Intercept'] + sample['x'] * x  ## linear model
samples = 50  ## choose to be the same as in the plot call above
trace_det = np.empty([samples, len(x)])  ## initialise
for i, rand_loc in enumerate(np.random.randint(0, len(trace), samples)):
    rand_sample = trace[rand_loc]
    trace_det[i] = lm(x, rand_sample)
y = trace_det.T
y[0]  ## the sampled regression-line values at x[0]
Sorry if it's not the most elegant; hopefully you can follow the logic.
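Note that lm above gives draws of the regression line (the mean of the Gaussian), not of y itself. To sample from the full posterior predictive p(y | x), you also need to add the likelihood noise. A sketch along the same lines, assuming the noise scale is stored in the trace under 'sd' (check trace.varnames):
ppc = np.empty([samples, len(x)])
for i, rand_loc in enumerate(np.random.randint(0, len(trace), samples)):
    rand_sample = trace[rand_loc]
    mu = lm(x, rand_sample)                           # regression line for this draw
    ppc[i] = np.random.normal(mu, rand_sample['sd'])  # add the Gaussian noise

# Summarise p(y | x) at each x, e.g. with a mean and a 95% interval
y_mean = ppc.mean(axis=0)
y_lo, y_hi = np.percentile(ppc, [2.5, 97.5], axis=0)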