Chernoff bound (Chernoff-Hoeffding bound)

In probability theory, the Chernoff bound, named after Herman Chernoff, gives exponentially decreasing bounds on the tail distributions of sums of independent random variables. It is sharper than the first- or second-moment-based tail bounds such as Markov's inequality or Chebyshev's inequality, which yield only power-law bounds on tail decay. However, the Chernoff bound requires that the variates be independent, a condition that neither Markov's nor Chebyshev's inequality requires.

It is related to the (historically earliest) Bernstein inequalities, and to Hoeffding's inequality.

 

Definition

Let X_1, \ldots, X_n be independent Bernoulli random variables, each equal to 1 with probability p > 1/2. Then the probability that more than n/2 of the events \{X_k = 1\} occur has the exact value S, where

 S = \sum_{i = \lfloor n/2 \rfloor + 1}^n \binom{n}{i} p^i (1 - p)^{n - i}. (This is the probability that the event occurs more than n/2 times out of n; it lies in [0, 1].)

The Chernoff bound shows that S has the following lower bound:

 S \ge 1 - \mathrm{e}^{- \frac{1}{2p}n \left( {p - \frac{1}{2}} \right)^2} .
(Note: the S above denotes a probability, whereas the S in the derivation below denotes the total number of occurrences of the event.)

Indeed, letting S now denote the number of occurrences (so that \mu = E[S] = np) and applying the multiplicative form of the Chernoff bound (see below, or Corollary 13.3 in Sinclair's class notes), we get

P\left[S\le\frac{n}{2}\right]=P\left[S\le\left(1-\left(1-\frac{1}{2p}\right)\right)\mu\right]\leq e^{-\frac{\mu}{2}\left(1-\frac{1}{2p}\right)^2}=e^{-\frac{1}{2p}n\left(p-\frac{1}{2}\right)^2}
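
As a quick numerical illustration of this bound, here is a minimal sketch (the values n = 100 and p = 0.7 are my own choice, not from the text) comparing the exact value of S with the Chernoff lower bound:

```python
# Minimal sketch comparing the exact majority probability S with the Chernoff
# lower bound 1 - exp(-(n / (2p)) * (p - 1/2)^2); n and p are illustrative.
from math import comb, exp

def exact_majority_prob(n, p):
    """Exact probability that more than n/2 of n Bernoulli(p) trials equal 1."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n // 2 + 1, n + 1))

def chernoff_lower_bound(n, p):
    """Lower bound on the same probability given by the Chernoff bound."""
    return 1 - exp(-(n / (2 * p)) * (p - 0.5) ** 2)

n, p = 100, 0.7
print(exact_majority_prob(n, p))    # exact S, very close to 1
print(chernoff_lower_bound(n, p))   # ~0.94, a valid but looser lower bound
```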

This result admits various generalizations as outlined below. One can encounter many flavours of Chernoff bounds: the original additive form (which gives a bound on the absolute error) or the more practical multiplicative form (which bounds the error relative to the mean).

A motivating example

The simplest case of Chernoff bounds is used to bound the success probability of majority agreement for n independent, equally likely events.

A simple motivating example is to consider a biased coin. One side (say, Heads), is more likely to come up than the other, but you don't know which and would like to find out. The obvious solution is to flip it many times and then choose the side that comes up the most. But how many times do you have to flip it to be confident that you've chosen correctly?

In our example, let X_i denote the event that the ith coin flip comes up Heads; suppose that we want to ensure we choose the wrong side with at most a small probability ε. Then, rearranging the above, we must have:

 n \geq \frac{1}{(p -1/2)^2} \ln \frac{1}{\sqrt{\varepsilon}}.

If the coin is noticeably biased, say coming up on one side 60% of the time (p = 0.6), then we can guess that side with 95% accuracy (\varepsilon = 0.05) after 150 flips (n = 150). If it is 90% biased, then a mere 10 flips suffices. If the coin is only biased a tiny amount, as most real coins are, the number of necessary flips becomes much larger.
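
This calculation is easy to reproduce. The sketch below (a rough simulation with my own helper names, not code from the article) computes n from the displayed inequality and estimates the empirical error rate of the majority vote:

```python
# Sketch: number of flips needed so the majority vote errs with probability <= eps,
# using n >= ln(1/sqrt(eps)) / (p - 1/2)^2, plus a rough Monte Carlo check.
import random
from math import ceil, log

def flips_needed(p, eps):
    """Smallest integer n satisfying the bound from the text."""
    return ceil(0.5 * log(1 / eps) / (p - 0.5) ** 2)

def majority_error_rate(p, n, trials=20_000):
    """Empirical probability that Heads does NOT win the majority vote."""
    wrong = 0
    for _ in range(trials):
        heads = sum(random.random() < p for _ in range(n))
        wrong += heads <= n / 2
    return wrong / trials

p, eps = 0.6, 0.05
n = flips_needed(p, eps)              # 150, matching the text
print(n, majority_error_rate(p, n))   # observed error is well below eps
```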

More practically, the Chernoff bound is used in randomized algorithms (or in computational devices such as quantum computers) to determine a bound on the number of runs necessary to determine a value by majority agreement, up to a specified probability. For example, suppose an algorithm (or machine) A computes the correct value of a function f with probability p > 1/2. If we choose n satisfying the inequality above, the probability that a majority exists and is equal to the correct value is at least 1 − ε, which for small enough ε is quite reliable. If p is a constant, ε diminishes exponentially with growing n, which is what makes algorithms in the complexity class BPP efficient.

Notice that if p is very close to 1/2, the necessary n can become very large. For example, if p = 1/2 + 1/2^m, as it might be in some PP algorithms, the result is that n is bounded below by an exponential function in m:

 n \geq 2^{2m} \ln \frac{1}{\sqrt{\varepsilon}}.

The first step in the proof of Chernoff bounds

The Chernoff bound for a random variable X, which is the sum of n independent random variables X_1, X_2, ..., X_n, is obtained by applying Markov's inequality to e^{tX} for some well-chosen value of t. This method was first applied by Sergei Bernstein to prove the related Bernstein inequalities.

From Markov's inequality and using independence we can derive the following useful inequality:

For any t > 0,

\Pr\left[X \ge a\right] = \Pr\left[e^{tX} \ge e^{ta}\right] \le \frac{ E[e^{tX}]}{e^{ta}} = {\prod_i E[e^{tX_i}]\over e^{ta}}.

In particular optimizing over t and using independence we obtain,

 \Pr\left[X \ge a\right] \leq \min_{t>0} \frac{\prod_i E[e^{tX_i}]}{e^{ta}}. \qquad (1)

Similarly,

\Pr\left[X \le a\right] = \Pr\left[e^{-tX} \ge e^{-ta}\right]

and so,

 \Pr\left[X \le a\right] \leq \min_{t>0} e^{ta} \prod_i E[e^{-tX_i}] .
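
As an illustration of inequality (1), the sketch below (my own example; the grid search over t is just a simple way to approximate the infimum) evaluates the optimized bound for a sum of independent Bernoulli variables:

```python
# Sketch of inequality (1): bound Pr[X >= a] for X = sum of independent
# Bernoulli(p_i) variables by minimizing prod_i E[e^{t X_i}] / e^{t a} over t > 0.
from math import exp

def chernoff_bound_via_mgf(ps, a):
    """Approximate min over t > 0 of the MGF bound, using a coarse grid for t."""
    best = 1.0  # the trivial bound Pr[...] <= 1
    for k in range(1, 1001):
        t = k / 100.0                            # t ranges over (0, 10]
        mgf_product = 1.0
        for p in ps:
            mgf_product *= p * exp(t) + (1 - p)  # E[e^{t X_i}] for Bernoulli(p_i)
        best = min(best, mgf_product / exp(t * a))
    return best

ps = [0.5] * 100                        # 100 fair coin flips, E[X] = 50
print(chernoff_bound_via_mgf(ps, 70))   # bound on Pr[X >= 70], roughly 3e-4
```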

Precise statements and proofs

Theorem for additive form (absolute error)

The following theorem is due to Wassily Hoeffding and is hence called the Chernoff-Hoeffding theorem.

Assume the random variables X_1, X_2, \ldots, X_m are i.i.d. with X_i \in \{0,1\}. Let p = E\left[X_i\right] and \varepsilon > 0. Then

 \Pr\left[ \frac 1 m \sum X_i \geq p + \varepsilon \right] \leq \left( \left(\frac{p}{p + \varepsilon}\right)^{p+\varepsilon} \left(\frac{1 - p}{1 - p - \varepsilon}\right)^{1 - p - \varepsilon}\right)^m = e^{-D(p+\varepsilon \| p)\, m}

and

 \Pr\left[ \frac 1 m \sum X_i \leq p - \varepsilon \right] \leq \left( \left(\frac{p}{p - \varepsilon}\right)^{p-\varepsilon} \left(\frac{1 - p}{1 - p + \varepsilon}\right)^{1 - p + \varepsilon}\right)^m = e^{-D(p-\varepsilon \| p)\, m},

where

 D(x||y) = x \log \frac{x}{y} + (1-x) \log \frac{1-x}{1-y}

is the Kullback-Leibler divergence between Bernoulli-distributed random variables with parameters x and y respectively. If p \geq 1/2, then \Pr\left[ X > mp + x \right] \leq \exp\left(-\frac{x^2}{2mp(1-p)}\right).
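
For intuition about how this bound behaves, here is a small sketch (m, p, and \varepsilon are my own illustrative choices) comparing the exact binomial tail with e^{-D(p+\varepsilon\|p)m}:

```python
# Sketch: exact upper tail of (1/m) sum X_i versus the Chernoff-Hoeffding bound
# exp(-D(p + eps || p) * m); m, p, eps are illustrative values only.
from math import ceil, comb, exp, log

def kl_bernoulli(x, y):
    """Kullback-Leibler divergence D(x||y) between Bernoulli(x) and Bernoulli(y)."""
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

def exact_upper_tail(m, p, eps):
    """Pr[(1/m) sum X_i >= p + eps] for i.i.d. Bernoulli(p) variables."""
    k = ceil(m * (p + eps))
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

m, p, eps = 200, 0.5, 0.1
print(exact_upper_tail(m, p, eps))          # exact tail (a few times 1e-3)
print(exp(-kl_bernoulli(p + eps, p) * m))   # Chernoff-Hoeffding bound, ~0.018
```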

Proof

The proof starts from the general inequality (1) above. Let q = p + \varepsilon. Taking a = mq in (1), we obtain:

 \Pr\left[ \frac{1}{m} \sum X_i \ge q\right] \le \inf_{t>0} \frac{E \left[\prod e^{t X_i}\right]}{e^{tmq}} = \inf_{t>0} \left[\frac{ E\left[e^{tX_i} \right] }{e^{tq}}\right]^m .

Now, knowing that \Pr[X_i = 1] = p, \Pr[X_i = 0] = (1-p), we have

 \left[\frac{ E\left[e^{tX_i} \right] }{e^{tq}}\right]^m = \left[\frac{p e^t + (1-p)}{e^{tq} }\right]^m = [pe^{(1-q)t} + (1-p)e^{-qt}]^m.

Therefore we can easily compute the infimum, using calculus and some logarithms. Thus,

 \frac{d}{dt} \log\left(pe^{(1-q)t} + (1-p)e^{-qt}\right) = \frac{(1-q)pe^{(1-q)t} - q(1-p)e^{-qt}}{pe^{(1-q)t} + (1-p)e^{-qt}} = -q + \frac{pe^{(1-q)t}}{pe^{(1-q)t}+(1-p)e^{-qt}}

Setting the last equation to zero and solving, we have

 \begin{align} q &= \frac{pe^{(1-q)t}}{pe^{(1-q)t}+(1-p)e^{-qt}} = \frac{pe^{(1-q)t}}{e^{-qt}(pe^{t}+(1-p))} \\ pe^{(1-q)t} &= pe^{-qt}e^{t} = qe^{-qt}(pe^{t}+1-p) \\ \frac{p}{q}e^{t} &= pe^{t} + 1 - p \end{align}

so that e^t = (1-p)\left(\frac{p}{q}-p\right)^{-1}.

Thus, t = \log\left(\frac{(1-p)q}{(1-q)p}\right).

As q = p + \varepsilon > p, we see that t > 0, so the constraint t > 0 in our bound is satisfied. Having solved for t, we can plug back into the equations above to find that

 \begin{align} \log\left(pe^{(1-q)t} + (1-p)e^{-qt}\right) &= \log\left[e^{-qt}(1-p+pe^{t})\right] \\ &= \log\left[e^{-q \log\left(\frac{(1-p)q}{(1-q)p}\right)}\right] + \log\left[1-p+pe^{\log\left(\frac{1-p}{1-q}\right)}e^{\log\frac{q}{p}}\right] \\ &= -q\log\frac{1-p}{1-q} - q \log\frac{q}{p} + \log\left[1-p+ p\left(\frac{1-p}{1-q}\right)\frac{q}{p}\right] \\ &= -q\log\frac{1-p}{1-q} - q \log\frac{q}{p} + \log\left[\frac{(1-p)(1-q)}{1-q}+\frac{(1-p)q}{1-q}\right] \\ &= -q\log\frac{q}{p} + (1-q)\log\frac{1-p}{1-q} = -D(q \| p). \end{align}

We now have our desired result, that

 \Pr\left[\frac{1}{m}\sum X_i \ge p + \varepsilon\right] \le e^{-D(p+\varepsilon\|p) m}.

To complete the proof for the symmetric case, we simply define the random variable Y_i = 1-X_i, apply the same proof, and plug it into our bound.

Simpler bounds

A simpler bound follows by relaxing the theorem using D( p + x \| p) \geq 2 x^2, which follows from the convexity of D(p+x\| p) and the fact that \frac{d^2}{dx^2} D(p+x\|p) = \frac{1}{(p+x)(1-p-x)}\geq 4=\frac{d^2}{dx^2}(2x^2). This results in a special case of Hoeffding's inequality. Sometimes, the bound D( (1+x) p \| p) \geq x^2 p/4 for -1/2 \leq x \leq 1/2, which is stronger for p<1/8, is also used.
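
The relaxation D(p + x \| p) \geq 2x^2 is easy to verify numerically; the sketch below (the grid of p and x values is arbitrary) checks it pointwise:

```python
# Sketch: pointwise check of D(p + x || p) >= 2 x^2 on an arbitrary grid,
# the inequality behind the simpler Hoeffding-style bound.
from math import log

def kl_bernoulli(x, y):
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

for p in (0.1, 0.3, 0.5):
    for x in (0.05, 0.1, 0.2):
        assert kl_bernoulli(p + x, p) >= 2 * x * x   # holds at every grid point
print("D(p + x || p) >= 2 x^2 held at every grid point")
```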

Theorem for multiplicative form of Chernoff bound (relative error)

Let X_1, X_2, \ldots, X_n be independent random variables taking values in \{0, 1\}. Further, assume that \Pr(X_i = 1) = p_i. Then, if we let X = \sum_{i=1}^n X_i and \mu be the expectation of X, for any \delta > 0

 \Pr \left[ X > (1+\delta)\mu\right] < \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^\mu.
Proof

According to (1),

 \begin{align} \Pr[X > (1 + \delta)\mu] &\le \inf_{t > 0} \frac{\mathbf{E}\left[\prod_{i=1}^n\exp(tX_i)\right]}{\exp(t(1+\delta)\mu)} \\ &= \inf_{t > 0} \frac{\prod_{i=1}^n\mathbf{E}[\exp(tX_i)]}{\exp(t(1+\delta)\mu)} \\ &= \inf_{t > 0} \frac{\prod_{i=1}^n\left[p_i\exp(t) + (1-p_i)\right]}{\exp(t(1+\delta)\mu)} \end{align}

The third line above follows because e^{tX_i} takes the value e^{t} with probability p_i and the value 1 with probability 1-p_i. This is identical to the calculation above in the proof of the Theorem for additive form (absolute error).

Rewriting p_ie^t + (1-p_i) as p_i(e^t-1) + 1 and recalling that 1+x \le e^x (with strict inequality if x > 0), we set x = p_i(e^t-1). The same result can be obtained by directly replacing a in the equation for the Chernoff bound with (1+\delta)\mu.[1]

Thus,

 \begin{align} \Pr[X > (1+\delta)\mu] &< \frac{\prod_{i=1}^n\exp(p_i(e^t-1))}{\exp(t(1+\delta)\mu)} \\ &= \frac{\exp\left((e^t-1)\sum_{i=1}^n p_i\right)}{\exp(t(1+\delta)\mu)} = \frac{\exp((e^t-1)\mu)}{\exp(t(1+\delta)\mu)}. \end{align}

If we simply set t = \log(1+\delta) so that  t > 0 for \delta > 0, we can substitute and find

 \frac{\exp((e^t-1)\mu)}{\exp(t(1+\delta)\mu)} = \frac{\exp((1+\delta - 1)\mu)}{(1+\delta)^{(1+\delta)\mu}} = \left[\frac{\exp(\delta)}{(1+\delta)^{(1+\delta)}}\right]^\mu

This proves the result desired. A similar proof strategy can be used to show that

 \Pr[X < (1-\delta)\mu] < \exp(-\mu\delta^2/2).
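
The multiplicative form is easy to evaluate numerically. The sketch below (n, p, and \delta are my own illustrative choices) compares the upper-tail bound with the exact tail in the i.i.d. special case X ~ Binomial(n, p):

```python
# Sketch: multiplicative Chernoff bound versus the exact tail, in the i.i.d.
# special case X ~ Binomial(n, p); the parameters are illustrative only.
from math import comb, exp, log

def exact_tail(n, p, threshold):
    """Pr[X > threshold] for X ~ Binomial(n, p)."""
    k = int(threshold) + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def multiplicative_bound(mu, delta):
    """(e^delta / (1 + delta)^(1 + delta))^mu, written in exp/log form."""
    return exp(mu * (delta - (1 + delta) * log(1 + delta)))

n, p, delta = 100, 0.2, 0.5
mu = n * p
print(exact_tail(n, p, (1 + delta) * mu))   # exact Pr[X > 30], much smaller
print(multiplicative_bound(mu, delta))      # bound, roughly 0.11
```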

Better Chernoff bounds for some special cases

We can obtain stronger bounds using simpler proof techniques for some special cases of symmetric random variables.

Let X_1, X_2, ..., X_n be independent random variables,

X = \sum_{i=1}^n X_i.

(a) \Pr(X_i = 1) = \Pr(X_i = -1) = \frac{1}{2}.

Then,

\Pr( X \ge a) \le e^{\frac{-a^2}{2n}}, \quad a > 0 ,

and therefore also

\Pr( |X| \ge a) \le 2e^{\frac{-a^2}{2n}}, \quad a > 0  .

(b) \Pr(X_i = 1) = \Pr(X_i = 0) = \frac{1}{2}, \mathbf{E}[X] = \mu = \frac{n}{2}

Then,

\Pr( X \ge \mu+a) \le e^{\frac{-2a^2}{n}}, \quad a > 0,
\Pr( X \ge (1+\delta)\mu) \le e^{-\frac{\delta^2\mu}{3}}, \quad \delta > 0,
\Pr( X \le \mu-a) \le e^{\frac{-2a^2}{n}}, \quad 0 < a < \mu,
\Pr( X \le (1-\delta)\mu) \le e^{-\frac{\delta^2\mu}{2}}, \quad 0 < \delta < 1.
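
These special-case bounds are easy to test by simulation; the following sketch (the sample size and parameters are arbitrary) checks case (a) empirically:

```python
# Sketch: Monte Carlo check of case (a), Pr[X >= a] <= exp(-a^2 / (2n)) for a
# sum X of n independent uniform +/-1 variables; the sample size is arbitrary.
import random
from math import exp

def empirical_tail(n, a, trials=50_000):
    """Fraction of simulated sums that reach at least a."""
    hits = 0
    for _ in range(trials):
        x = sum(random.choice((-1, 1)) for _ in range(n))
        hits += x >= a
    return hits / trials

n, a = 100, 20
print(empirical_tail(n, a))     # empirically around 0.02-0.03
print(exp(-a * a / (2 * n)))    # bound: exp(-2), about 0.135
```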

Applications of Chernoff bound

Chernoff bounds have very useful applications in set balancing and packet routing in sparse networks.

The set balancing problem arises while designing statistical experiments. Typically while designing a statistical experiment, given the features of each participant in the experiment, we need to know how to divide the participants into 2 disjoint groups such that each feature is roughly as balanced as possible between the two groups. Refer to this book section for more info on the problem.

Chernoff bounds are also used to obtain tight bounds for permutation routing problems which reduce network congestion while routing packets in sparse networks. Refer to this book section for a thorough treatment of the problem.

Matrix Chernoff bound

Rudolf Ahlswede and Andreas Winter introduced (Ahlswede & Winter 2003) a Chernoff bound for matrix-valued random variables.

If M is distributed according to some distribution over d \times d matrices with zero mean, and if M_1, M_2, \ldots, M_t are independent copies of M, then for any \varepsilon > 0,

 \Pr \left( \bigg\Vert \frac{1}{t} \sum_{i=1}^t M_i - \mathbf{E}[M] \bigg\Vert_2 > \varepsilon \right) \leq d \exp \left( -C \frac{\varepsilon^2 t}{\gamma^2} \right).

where  \lVert M \rVert_2 \leq \gamma  holds almost surely and  C>0  is an absolute constant.

Notice that the number of samples in the inequality depends logarithmically on d. In general, unfortunately, such a dependency is inevitable: take for example a diagonal random sign matrix of dimension d. The operator norm of the sum of t independent samples is precisely the maximum deviation among d independent random walks of length t. In order to achieve a fixed bound on the maximum deviation with constant probability, it is easy to see that t should grow logarithmically with d in this scenario.[2]
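
The diagonal random-sign example in the previous paragraph is simple to simulate; here is a minimal sketch (d, t, and the seed are arbitrary choices) using NumPy:

```python
# Sketch of the diagonal random-sign example: for diagonal matrices the operator
# norm of (1/t) * sum_i M_i is the largest absolute diagonal entry, i.e. the
# maximum deviation among d independent +/-1 random walks of length t, scaled by 1/t.
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, purely for reproducibility
d, t = 64, 400                   # arbitrary illustrative dimensions

signs = rng.choice((-1.0, 1.0), size=(t, d))   # row i = diagonal of M_i
avg_diag = signs.mean(axis=0)                  # diagonal of the sample average

operator_norm = float(np.max(np.abs(avg_diag)))
print(operator_norm)             # typically on the order of sqrt(log(d) / t)
```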

The following theorem can be obtained by assuming M has low rank, in order to avoid the dependency on the dimensions.

Theorem without the dependency on the dimensions

Let  0<\varepsilon<1  and  M  be a random symmetric real matrix with  \Vert \mathbf{E}[M] \Vert_2 \leq 1  and  \Vert M \Vert_2 \leq \gamma  almost surely. Assume that each element on the support of  M  has at most rank  r . Set

 t = \Omega \left( \frac{\gamma\log (\gamma/\varepsilon^2)}{\varepsilon^2} \right) .

If  r \leq t  holds almost surely, then

 \Pr \left( \bigg\Vert \frac{1}{t} \sum_{i=1}^t M_i - \mathbf{E}[M] \bigg\Vert_2 > \varepsilon \right) \leq \frac{1}{\mathbf{poly}(t)}

where  M_1,M_2,\ldots,M_t  are i.i.d. copies of M.

 

From: http://en.wikipedia.org/wiki/Chernoff_bound
