Stochastic Processes: Draft Notes

Stochastic process

Mostly from Wikipedia.
I figured English would be easier to follow, but writing it myself I still misuse plenty of words (
Treat it as LaTeX practice.

A stochastic process is a mathematical object defined as a family of random variables.

Definitions

Stochastic process
  • a collection of random variables indexed by some set.
  • often interpreted as the numerical values of some system changing randomly over time.
Random Function

A stochastic process can also be interpreted as a random element in a function space.

Random Field

If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead.

Discrete-time & Continuous-time Stochastic Processes

A stochastic process is called discrete-time if its index set (usually interpreted as time) has a finite or countable number of elements, and continuous-time otherwise.

State Space

The set from which each random variable takes its values.

Discrete/Integer-valued SP
  • state space: integers or natural numbers
Real-valued SP
  • state space: real line
N-dimensional Vector Process
  • state space: n-dimensional Euclidean space

Notation

probability space

$(\Omega, F, P)$

where $\Omega$ is a sample space, $F$ is a $\sigma$-algebra, and $P$ is a probability measure.

measurable space

$(S, \Sigma)$

where $S$ is the state space.

stochastic process

$\{X(t) : t \in T\}$

where $X(t)$ refers to the random variable with index $t$, and

$T$ is called the index set or parameter set.

distribution function $F_{t_1, t_2, \cdots, t_n}(x_1, x_2, \cdots, x_n)$

$$F_{t_1,t_2,\cdots,t_n}(x_1,x_2,\cdots,x_n) = P\{X(t_1)\leq x_1, X(t_2)\leq x_2, \cdots, X(t_n)\leq x_n\}$$

If $X(t_1)$ and $X(t_2)$ are independent,
$$P\{X(t_1)\leq x_1, X(t_2)\leq x_2\} = P\{X(t_1)\leq x_1\}\,P\{X(t_2)\leq x_2\}$$

mean function $m_X(t)$

$$m_X(t) = E[X(t)], \quad t \in T$$

covariance function $B_X(s,t)$

$$B_X(s,t) = E[(X(s)-m_X(s))(X(t)-m_X(t))]$$

variance function $D_X(t) = B_X(t,t)$

$$D_X(t) = \sigma^2_X(t) = E[(X(t)-m_X(t))^2] = E[X^2(t)] - m_X(t)^2 = E[X^2(t)] - (E[X(t)])^2$$
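
A quick check of the last identity, expanding the square and using linearity of expectation:

$$\begin{aligned} E[(X(t)-m_X(t))^2] &= E[X^2(t)] - 2\,m_X(t)\,E[X(t)] + m_X(t)^2 \\ &= E[X^2(t)] - 2\,m_X(t)^2 + m_X(t)^2 = E[X^2(t)] - m_X(t)^2. \end{aligned}$$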

correlation function $R_X(s, t)$

$$R_X(s,t) = E[X(s)X(t)]$$

Here $m_X(t)$ is the mean value of $X(t)$, $D_X(t)$ measures the deviation of $X(t)$ from its mean at time $t$, and $B_X(s,t)$ and $R_X(s,t)$ describe the dependence of the process $\{X(t), t \in T\}$ between different times $s$ and $t$; note that $B_X(s,t) = R_X(s,t) - m_X(s)\,m_X(t)$.

variance $DX$

$$DX = E[X^2] - (EX)^2$$

integral representation of $E[f(X)]$, $X \sim U(0, T)$

If $X \sim U(0, T)$:
$$E[f(X)] = \frac{1}{T}\int_{0}^{T} f(x)\,dx,$$
representing the average of $f$ over every possible value of $X$ in $(0, T)$.
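
A quick numerical sanity check of this identity (a minimal sketch; the integrand $f(x)=x^2$ and $T=2$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2.0
f = lambda x: x ** 2          # illustrative integrand, chosen arbitrarily

# Monte Carlo estimate: sample X ~ U(0, T) and average f(X)
samples = rng.uniform(0.0, T, size=1_000_000)
mc_estimate = f(samples).mean()

# Closed form: (1/T) * int_0^T x^2 dx = T^2 / 3
exact = T ** 2 / 3

print(f"Monte Carlo: {mc_estimate:.4f}, exact: {exact:.4f}")
```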

Process with Orthogonal Increments

Given a stochastic process $\{X(t), t \in T\}$,

if $EX(t) = 0$ and, for all $t_1 < t_2 \leq t_3 < t_4 \in T$,
$$E[(X(t_2)-X(t_1))\overline{(X(t_4)-X(t_3))}] = 0,$$

then $\{X(t), t \in T\}$ is a process with orthogonal increments.

In particular, if $T = [a, \infty)$ and $X(a) = 0$,
$$B_X(s,t) = R_X(s,t) = \sigma_X^2(\min(s,t))$$
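
Why this holds (a short derivation for the real-valued case, taking $s \leq t$): writing $X(s) = X(s) - X(a)$ and splitting $X(t)$ at $s$,

$$R_X(s,t) = E[X(s)X(t)] = E[(X(s)-X(a))(X(t)-X(s))] + E[X^2(s)] = 0 + \sigma_X^2(\min(s,t)),$$

where the cross term vanishes by orthogonal increments (take $t_1 = a < t_2 = s \leq t_3 = s < t_4 = t$), $E[X^2(s)] = \sigma_X^2(s)$ because the mean is zero, and $B_X = R_X$ for the same reason.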

Normal distribution

$N(\mu, \sigma^2)$: $EX = \mu$, $DX = \sigma^2$.

In particular, the standard normal distribution has
$\mu = 0$, $\sigma^2 = 1$.

Poisson process

It can be defined as a counting process, which represents the random number of events up to some time:
$$P\{X(t+s) - X(s) = n\} = e^{-\lambda t}\,\frac{(\lambda t)^n}{n!}$$

  • has the natural numbers as its state space and the non-negative real numbers as its index set.

Let $\{X(t), t \geq 0\}$ be a Poisson process; for $t, s \in [0, \infty)$ and $s \le t$,
$$E[X(t)-X(s)] = D[X(t)-X(s)] = \lambda(t-s).$$
Since $X(0) = 0$,
$$m_X(t) = \lambda t, \qquad \sigma^2_X(t) = \lambda t, \qquad B_X(s,t) = \lambda s \quad (s \le t),$$
or in general,
$$B_X(s,t) = \lambda \min(s,t)$$
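
A small simulation to check $m_X(t) = \lambda t$ and $\sigma_X^2(t) = \lambda t$ (a minimal sketch; the rate $\lambda = 1.5$ and time $t = 4$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t = 1.5, 4.0            # illustrative rate and time horizon
n_paths = 200_000

# X(t) of a Poisson process is Poisson(lam * t) distributed,
# so the counts at time t can be sampled directly.
counts = rng.poisson(lam * t, size=n_paths)

print(f"mean:     {counts.mean():.3f}  (theory {lam * t:.3f})")
print(f"variance: {counts.var():.3f}  (theory {lam * t:.3f})")
```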

Poisson distribution

$P(\lambda)$:
$$P(X=k) = \frac{\lambda^k}{k!}e^{-\lambda}, \qquad EX = DX = \lambda$$

Compound Poisson process

If $\{N(t), t \geq 0\}$ is a Poisson process with rate $\lambda$,

$\{Y_k, k = 1, 2, \cdots\}$ is a family of independent and identically distributed random variables,

and is independent of $\{N(t), t \geq 0\}$, then
$$X(t) = \sum_{k=1}^{N(t)} Y_k, \quad t \geq 0,$$
defines a compound Poisson process $\{X(t), t \geq 0\}$, with
$$E[X(t)] = \lambda t\,E(Y_1), \qquad D[X(t)] = \lambda t\,E(Y_1^2)$$
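
A quick simulation of these two moments (a sketch; the jump distribution $Y_k \sim \mathrm{Exp}(1)$ and the values $\lambda = 2$, $t = 3$ are illustrative assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t = 2.0, 3.0            # illustrative rate and time
n_paths = 100_000

# N(t) ~ Poisson(lam * t); given N(t) = n, X(t) is a sum of n iid jumps.
n_jumps = rng.poisson(lam * t, size=n_paths)
x_t = np.array([rng.exponential(1.0, size=n).sum() for n in n_jumps])

# For Y ~ Exp(1): E(Y_1) = 1 and E(Y_1^2) = 2.
print(f"E[X(t)]: {x_t.mean():.3f}  (theory {lam * t * 1.0:.3f})")
print(f"D[X(t)]: {x_t.var():.3f}  (theory {lam * t * 2.0:.3f})")
```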

Markov Chain

The (one-step) probability transition matrix is written as
$$P = [p_{ij}],$$
and the two-step transition probability matrix is
$$P^{(2)} = PP$$
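
Multi-step transition matrices are just matrix powers, which is easy to check numerically (a minimal sketch; the two-state matrix below is a toy example of mine):

```python
import numpy as np

# Toy two-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

P2 = P @ P                              # two-step: P^{(2)} = P P
Pn = np.linalg.matrix_power(P, 50)      # n-step: P^{(n)} = P^n

print(P2)
print(Pn)   # for this chain, rows converge to the stationary distribution
```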

State classification of Markov chain

Assume the state space is $I = \{1, 2, \cdots, 9\}$.

For state 1, the return time $T$ is the number of steps it takes the chain to come back to state 1.

For the set $\{n : n \geq 1, p_{ii}^{(n)} > 0\}$, let
$$d = d(i) = \gcd\{n : p_{ii}^{(n)} > 0\};$$
$d$ is called the period of state $i$:

if $d > 1$, state $i$ is periodic;

if $d = 1$, state $i$ is aperiodic. For example, in a two-state chain that deterministically alternates between its states, $p_{ii}^{(n)} > 0$ only for even $n$, so $d = 2$.

$$f_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)},$$
where $f_{ij}$ is the probability that the chain starting from $i$ eventually reaches $j$.

When $f_{ii} = 1$, the state $i$ is recurrent. A necessary and sufficient condition is
$$\sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty.$$
In particular, state $i$ is an ergodic state if it is aperiodic and positive recurrent.

stationary distribution

$$\begin{cases} \pi_j = \sum_{i \in I} \pi_i p_{ij}, \\ \sum_{j \in I} \pi_j = 1, \quad \pi_j \geq 0, \end{cases}$$

and the expected return time of state $i$ is
$$\mu_i = \frac{1}{\pi_i}$$
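
The stationary distribution can be found numerically by solving $\pi P = \pi$ together with the normalization constraint (a sketch reusing the toy matrix from above):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = P.shape[0]

# Solve pi (P - I) = 0 together with sum(pi) = 1 by least squares
# on the stacked system [P^T - I; 1 ... 1] pi = [0; 1].
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print(pi)          # stationary distribution, here (5/6, 1/6)
print(1.0 / pi)    # expected return times mu_i = 1 / pi_i
```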

The Birth-Death Process

$\{X(t), t \geq 0\}$ is a birth-death process when
$$\begin{cases} p_{i,i+1}(h) = \lambda_i h + o(h), & \lambda_i > 0, \\ p_{i,i-1}(h) = \mu_i h + o(h), & \mu_i > 0,\ \mu_0 = 0, \\ p_{ii}(h) = 1 - \lambda_i h - \mu_i h + o(h), \\ p_{ij}(h) = o(h), & |i-j| \geq 2. \end{cases}$$

Kolmogorov forward equation

$$p'_{ij}(t) = \lambda_{j-1}\,p_{i,j-1}(t) - (\lambda_j + \mu_j)\,p_{ij}(t) + \mu_{j+1}\,p_{i,j+1}(t), \qquad i, j \in I$$
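
The forward equations can be integrated numerically on a truncated state space (a minimal sketch, not tied to any particular model in these notes: the constant birth rate $\lambda_j = \lambda$, linear death rate $\mu_j = j\mu$, and the truncation at 50 states are illustrative assumptions of mine):

```python
import numpy as np
from scipy.integrate import solve_ivp

n_states = 50                 # truncate the state space at n_states - 1
lam, mu = 2.0, 1.0            # assumed rates: lambda_j = lam, mu_j = j * mu

# Generator matrix Q: the forward equation reads p'(t) = p(t) Q, where
# p(t) is the row vector of probabilities p_{i.}(t) from a fixed start i.
Q = np.zeros((n_states, n_states))
for j in range(n_states):
    if j + 1 < n_states:
        Q[j, j + 1] = lam             # birth: j -> j + 1 at rate lambda_j
    if j > 0:
        Q[j, j - 1] = j * mu          # death: j -> j - 1 at rate mu_j
    Q[j, j] = -Q[j].sum()             # diagonal makes each row sum to 0

p0 = np.zeros(n_states)
p0[0] = 1.0                           # start in state i = 0

sol = solve_ivp(lambda t, p: p @ Q, (0.0, 5.0), p0, t_eval=[5.0])
p_t = sol.y[:, -1]
print(f"mean state at t=5: {p_t @ np.arange(n_states):.3f}")
# With these rates the stationary mean is lam / mu = 2.0.
```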

Hoping I won't fail the exam...
