Stirling’s formula
$$n! \sim \sqrt{2\pi n}\,(n/e)^n$$
Here $\sim$ means "asymptotically equivalent": the ratio of the two sides tends to 1 as $n \to \infty$.
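As a quick numerical illustration (a sketch not in the original; the printed values are for demonstration only), the ratio of $n!$ to the Stirling approximation tends to 1:

```python
# Sketch: check that n! / (sqrt(2*pi*n) * (n/e)^n) tends to 1.
import math

for n in (5, 10, 50, 100):
    approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(f"n={n}: ratio = {math.factorial(n) / approx:.6f}")
# The ratio approaches 1 from above, roughly like 1 + 1/(12n).
```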
Sums of powers of integers
Let
$$S_n^{(k)} = 1^k + 2^k + \cdots + n^k.$$
Then
$$S_n^{(k)} \sim \frac{n^{k+1}}{k+1}.$$
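A short check (again a sketch, not from the original notes) that the ratio $S_n^{(k)} \big/ \frac{n^{k+1}}{k+1}$ tends to 1 for a fixed $k$:

```python
# Sketch: S_n^(k) = 1^k + ... + n^k versus its asymptotic n^(k+1)/(k+1).
k = 3  # an arbitrary fixed power, chosen for illustration
for n in (10, 100, 1000):
    s = sum(i ** k for i in range(1, n + 1))
    print(f"n={n}: ratio = {s / (n ** (k + 1) / (k + 1)):.6f}")
# The ratio tends to 1; the leading correction is about (k+1)/(2n).
```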
The exponential limit
For any finite number $c$,
$$\left(1 + \frac{c}{n}\right)^n \rightarrow e^c \quad \text{as} \quad n \rightarrow \infty.$$
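A numerical sketch (not part of the original) of the limit for one choice of $c$:

```python
# Sketch: (1 + c/n)^n approaches e^c as n grows.
import math

c = 2.0  # an arbitrary finite constant for illustration
for n in (10, 1000, 100_000):
    print(f"n={n}: (1 + c/n)^n = {(1 + c / n) ** n:.6f}")
print(f"e^c = {math.exp(c):.6f}")
```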
Bonferroni inequalities
Boole’s inequality may be generalized to find upper and lower bounds on the probability of finite unions of events. These bounds are known as the Bonferroni inequalities, after Carlo Emilio Bonferroni; see Bonferroni (1936).
Define
$$S_1 = \sum_{i=1}^{n} \mathbb{P}(A_i),$$
and
$$S_2 = \sum_{1 \leq i < j \leq n} \mathbb{P}\left(A_i \cap A_j\right),$$
as well as
$$S_k = \sum_{1 \leq i_1 < \cdots < i_k \leq n} \mathbb{P}\left(A_{i_1} \cap \cdots \cap A_{i_k}\right)$$
for all integers $k$ in $\{3, \ldots, n\}$.
Then, for odd $k$ in $\{1, \ldots, n\}$,
$$\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \leq \sum_{j=1}^{k} (-1)^{j-1} S_j,$$
and for even $k$ in $\{2, \ldots, n\}$,
$$\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \geq \sum_{j=1}^{k} (-1)^{j-1} S_j.$$
Boole’s inequality is recovered by setting $k = 1$. When $k = n$, equality holds and the resulting identity is the inclusion–exclusion principle.
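The sketch below (not from the source; the probability space and the three events are hypothetical, chosen only for illustration) checks the alternating bounds by brute-force enumeration on a small uniform space:

```python
# Sketch: verify the Bonferroni bounds on a uniform 8-point space.
from itertools import combinations

p = 1 / 8  # probability of each of the outcomes 0..7

# Three hypothetical events (subsets of outcomes), for illustration only.
events = [{0, 1, 2, 3}, {2, 3, 4, 5}, {3, 5, 6}]
n = len(events)

def prob(s):
    return len(s) * p

def S(k):
    # S_k = sum of P(A_{i_1} ∩ ... ∩ A_{i_k}) over all k-subsets.
    return sum(prob(set.intersection(*combo))
               for combo in combinations(events, k))

p_union = prob(set.union(*events))
for k in range(1, n + 1):
    bound = sum((-1) ** (j - 1) * S(j) for j in range(1, k + 1))
    rel = "<=" if k % 2 == 1 else ">="
    print(f"k={k}: P(union) = {p_union:.3f} {rel} {bound:.3f}")
# Odd k gives an upper bound, even k a lower bound, and k = n is exact
# (inclusion-exclusion).
```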
Chebyshev’s inequality
Chebyshev’s inequality is usually stated for random variables, but can be generalized to a statement about measure spaces.
(1) Probabilistic statement
Let $X$ (integrable) be a random variable with finite expected value $\mu$ and finite non-zero variance $\sigma^2$. Then for any real number $k > 0$,
$$\Pr(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}.$$
Only the case $k > 1$ is useful: when $k \leq 1$, the right-hand side $\frac{1}{k^2} \geq 1$ and the inequality is trivial, as all probabilities are $\leq 1$.
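A Monte Carlo sketch (not from the source; the standard normal distribution and the sample size are assumptions for illustration) comparing the actual tail probability with the bound $\frac{1}{k^2}$:

```python
# Sketch: empirical P(|X - mu| >= k*sigma) versus the Chebyshev bound.
import random

random.seed(0)
N = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(N)]  # mu = 0, sigma = 1

for k in (1.5, 2.0, 3.0):
    tail = sum(abs(x) >= k for x in samples) / N
    print(f"k={k}: empirical tail = {tail:.4f}, bound 1/k^2 = {1 / k**2:.4f}")
# The bound holds for any distribution with finite variance; for the
# normal distribution it is quite loose.
```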
(2) Measure-theoretic statement
Let $(X, \Sigma, \mu)$ be a measure space, and let $f$ be an extended real-valued measurable function defined on $X$. Then for any real number $t > 0$ and $0 < p < \infty$,
$$\mu(\{x \in X : |f(x)| \geq t\}) \leq \frac{1}{t^p} \int_{|f| \geq t} |f|^p \, d\mu.$$
More generally, if $g$ is an extended real-valued measurable function, nonnegative and nondecreasing on the range of $f$, then
$$\mu(\{x \in X : f(x) \geq t\}) \leq \frac{1}{g(t)} \int_X g \circ f \, d\mu.$$
The previous statement then follows by defining $g(x)$ as $|x|^p$ if $x \geq t$ and $0$ otherwise.
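To make the connection to part (1) explicit (a step the text leaves implicit), apply the measure-theoretic inequality on the underlying probability space with $f = |X - \mu|$, $t = k\sigma$, and $p = 2$:
$$\Pr(|X - \mu| \geq k\sigma) \leq \frac{1}{(k\sigma)^2} \int_{|X - \mu| \geq k\sigma} |X - \mu|^2 \, d\Pr \leq \frac{\mathbb{E}\left[(X - \mu)^2\right]}{k^2 \sigma^2} = \frac{\sigma^2}{k^2 \sigma^2} = \frac{1}{k^2},$$
which recovers the probabilistic statement.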
Triangle inequality
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side.
If $x$, $y$, $z$ are the lengths of the three sides of a triangle, then
$$z \leq x + y.$$
In Euclidean geometry and some other geometries, the triangle inequality is a theorem about distances, and it is written using vectors and vector lengths (norms):
$$\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\|$$
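A quick sketch (not from the source; the random test vectors are arbitrary) checking the norm form of the inequality:

```python
# Sketch: ||x + y|| <= ||x|| + ||y|| for random 3-dimensional vectors.
import math
import random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
for _ in range(5):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = [random.uniform(-1, 1) for _ in range(3)]
    lhs = norm([a + b for a, b in zip(x, y)])
    rhs = norm(x) + norm(y)
    assert lhs <= rhs + 1e-12  # tolerance for floating-point rounding
    print(f"{lhs:.4f} <= {rhs:.4f}")
```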
The Basel problem
$$1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \sum_{t=1}^{\infty} \frac{1}{t^2} = \frac{\pi^2}{6}$$
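A sketch (not from the source) of the slow convergence of the partial sums to $\frac{\pi^2}{6} \approx 1.644934$:

```python
# Sketch: partial sums of sum 1/t^2 approaching pi^2 / 6.
import math

target = math.pi ** 2 / 6
s = 0.0
for t in range(1, 1_000_001):
    s += 1.0 / (t * t)
    if t in (10, 1000, 1_000_000):
        print(f"t={t}: partial sum = {s:.6f} (target {target:.6f})")
# The tail beyond t = n is roughly 1/n, so convergence is slow.
```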
Cauchy–Schwarz inequality
The inequality is written as
$$|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \|\mathbf{u}\| \, \|\mathbf{v}\|.$$
Moreover, the two sides are equal if and only if $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent (meaning they are parallel: one of the vectors has magnitude zero, or one is a scalar multiple of the other).
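A closing sketch (not from the source; the vectors are arbitrary test data) verifying both the inequality and the equality case for parallel vectors:

```python
# Sketch: |<u, v>| <= ||u|| * ||v||, with equality when v is parallel to u.
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

random.seed(1)
u = [random.uniform(-1, 1) for _ in range(4)]
v = [random.uniform(-1, 1) for _ in range(4)]
print(f"{abs(dot(u, v)):.6f} <= {norm(u) * norm(v):.6f}")

w = [2.5 * a for a in u]  # parallel to u, so equality holds (up to rounding)
print(f"{abs(dot(u, w)):.6f} == {norm(u) * norm(w):.6f}")
```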