I am receiving a ValueError when integrating with scipy, but I cannot understand why. Here is my simplified code:
import numpy as np
import scipy.integrate as integrate
pbar = 1
p = np.arange(0,pbar,pbar/1000)
h = lambda p: p**2/2+p*(1-p)
Kl = lambda p: h(p) +0.02
K = Kl(p)
R = 0.5*h(p) + 0.5*h(pbar)
Vl = lambda p: np.minimum.reduce([p, K, R])
integrate.quad(Vl, 0, pbar)[0]
Vl is intended to be the element-wise minimum of the three arrays. The last line raises the exception:
ValueError: setting an array element with a sequence.
Can someone please explain the error and propose an alternative way of doing this integration?
Solution
You have a bunch of 1000-element arrays:
In [8]: p.shape
Out[8]: (1000,)
In [9]: K.shape
Out[9]: (1000,)
In [10]: R.shape
Out[10]: (1000,)
In [11]: np.minimum.reduce([p, K, R]).shape
Out[11]: (1000,)
In [12]: Vl(p).shape
Out[12]: (1000,)
But integrate.quad calls Vl with a scalar: an integration variable ranging from 0 to pbar. The nature of the integration is to evaluate Vl at a number of points and sum the values appropriately.
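You can see this directly. Here is a minimal sketch, with a hypothetical integrand f that records every point quad hands it:

```python
import scipy.integrate as integrate

# hypothetical integrand that records each evaluation point
calls = []
def f(x):
    calls.append(x)   # quad passes a plain Python scalar here
    return x**2

result, _ = integrate.quad(f, 0, 1)
print(type(calls[0]))            # each evaluation point is a scalar float
print(abs(result - 1/3) < 1e-8)  # and the integral of x**2 on [0,1] is 1/3
```

Every element of calls is a float, never an array, which is why an integrand that only works on 1000-element arrays breaks.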
Vl(0) produces this error because it evaluates
In [15]: np.minimum.reduce([0, K, R])
ValueError: setting an array element with a sequence.
So you need to either change Vl to work with a scalar p, or perform the sum directly on the arrays.
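Both fixes can be sketched as follows, reusing the definitions from the question (the variable names val_quad and val_trapz are mine):

```python
import numpy as np
import scipy.integrate as integrate

pbar = 1
h = lambda p: p**2/2 + p*(1 - p)
Kl = lambda p: h(p) + 0.02

# Option 1: a scalar-friendly integrand -- compute K and R from the
# scalar p instead of capturing precomputed 1000-element arrays
Vl = lambda p: min(p, Kl(p), 0.5*h(p) + 0.5*h(pbar))
val_quad = integrate.quad(Vl, 0, pbar)[0]

# Option 2: keep the arrays and sum them directly, here with the
# trapezoidal rule on the existing grid
p = np.arange(0, pbar, pbar/1000)
K = Kl(p)
R = 0.5*h(p) + 0.5*h(pbar)
val_trapz = integrate.trapezoid(np.minimum.reduce([p, K, R]), p)
```

The two results agree up to the discretization error of the 1000-point grid (which also misses the endpoint pbar, since np.arange excludes the stop value).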
Writing
Vl = lambda x: np.minimum.reduce([x, K, R])
might have clued you in to the difference: Vl does not work for an x whose shape differs from the global p's. K and R are globals; only x is local to the lambda.