Notations:
F(n), G(n): running time of an algorithm with input size n; non-negative
“when n large enough, condition A holds” = “there is a constant X such that for n > X, condition A holds”
A/B, the integer division. results in an integer with truncation, i.e. 3/2 = 1
#of X = |X| = the size of X = number of items in X = #X
Asymptotic running time:
**Little-o notation:** f(x) = o(g(x)) as $x \to a$ (usually a is 0 or infinity), if
$$\lim_{x\to a} f(x)/g(x) = 0$$
example: if f(x) = o(g(x)) as x goes to 0, and f(0) = 0 = g(0), then f(x) goes to 0 faster than g(x)
as x goes to 0, x^3 = o(x^2)
as x goes to infinity, x^N = o(2^x) for any fixed constant N
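Both examples can be checked numerically (a quick sanity check, not a proof; the sample points are my own choice):

```python
# x^3 = o(x^2) as x -> 0: the ratio x^3/x^2 = x shrinks with x.
for x in [0.1, 0.01, 0.001]:
    print(x**3 / x**2)

# x^N = o(2^x) as x -> infinity (N = 10 here): the ratio is large at
# first but eventually collapses toward 0.
for x in [10, 100, 1000]:
    print(x**10 / 2**x)
```

Note that at x = 10 the second ratio is still huge; little-o only says something about behavior near the limit point, not at small x.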
Big-O Notation: We say f(n) is at most O(g(n)) if $\exists C, \exists N_0 \text{ s.t. } f(n) \le Cg(n)$ for $n > N_0$.
Written as f(n) <= O(g(n)) or f(n) = O(g(n)). Here the equality sign does not carry its traditional meaning!
Claim: If F(n) is o(G(n)) as n goes to infinity, then F(n) is O(G(n)). The reverse is not true: e.g., F(n) = G(n) = n gives F(n) = O(G(n)) but not F(n) = o(G(n)).
Warning: f(n) = O(g(n)) does not imply $\lim_{n\to \infty} f(n) / g(n)$ exists; the ratio only needs to stay bounded, and it may oscillate forever.
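A concrete instance of this warning (the example is mine, not from the notes): take f(n) = (2 + sin n)·g(n). Then f(n) = O(g(n)) with C = 3, but f(n)/g(n) = 2 + sin n has no limit.

```python
import math

g = lambda n: n
f = lambda n: (2 + math.sin(n)) * n   # f(n) <= 3*g(n), so f = O(g)

# The ratio stays bounded in [1, 3] but keeps oscillating, so the
# limit of f(n)/g(n) does not exist.
ratios = [f(n) / g(n) for n in range(1, 1000)]
print(min(ratios), max(ratios))
```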
Properties:
- P1: if x(n) = O(g(n)), then C x(n) = O(g(n)) if C is a constant.
- P2 transitivity: if x(n) = O(g(n)), g(n) = O(h(n)), then x(n) = O(h(n))
- P3: if x(n) = O(g(n)), y(n) = O(h(n)), g(n) = O(h(n)), then x(n) + y(n) = O(h(n))
Tradition:
- If f(n) is a constant, we write f(n) = O(1)
- Write f(n) = O(…) if f(n) <= O(…)
- Some also write f(n) < O(g(n)) to mean f(n) = O(g(n))
Facts:
- Fact1: log(N) = O(N^c) for any constant c > 0
- Fact2: log(log(N)) = O(log(N))
- Fact3: for any constant c > 0, N^c = O(N!)
- Fact4: for any constant c > 0, N^c = O(2^N)
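These facts are asymptotic, so a numerical spot-check needs a large N (my sample points below; at small N, log N can still exceed N^c for small c):

```python
import math

N = 10**30
c = 0.1
print(math.log(N))   # ln(10^30) ~ 69.1 ...
print(N**c)          # ... well below N^0.1 = 10^3  (Fact1)

n = 30
print(math.log(math.log(n)))      # below log(n)        (Fact2)
print(n**5 < math.factorial(n))   # N^c below N!        (Fact3)
print(n**5 < 2**n)                # N^c below 2^N       (Fact4)
```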
What’s input size:
say we have an array of N numbers, each number 4 bytes; what's the input size?
- 4N (bytes to encode) = O(N)
say our input is a single integer with maximum value N; what's its input size?
- Answer: O(log N). The input size is measured in bytes (or bits), and encoding a number up to N takes about log(N) of them.
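This can be made concrete in a few lines (the helper name is my own): the number of bytes needed to store an integer with maximum value N grows like log N.

```python
# ceil(bit_length / 8), using integer division as defined in the notations
def bytes_to_encode(n: int) -> int:
    return (n.bit_length() + 7) // 8

print(bytes_to_encode(255))        # -> 1
print(bytes_to_encode(256))        # -> 2
print(bytes_to_encode(2**32 - 1))  # -> 4
```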
Claim: GCD ends in O(log A) rounds.
We assume A > B. Idea: we want to show that every constant number of iterations, the larger number is reduced by at least half; then after T = O(log A) rounds we reach the base case (2^T ≈ A).
Notice that gcd(A, B) = gcd(B, A − xB) where x is at least 1, chosen so that A − xB is less than B.
Case 1: B ≤ A/2. Then A − xB < B ≤ A/2, done.
Case 2: B > A/2. Then pick x = 1, and A − xB < A/2.
So we get gcd(A, B) = gcd(C, D) where at least one of C and D is less than A/2. Doing one more step, gcd(C, D) = gcd(E, F) where E is D and F < E, so both E and F are less than A/2. Hence A is reduced by half in a constant number of rounds (2 here), so GCD will end in O(log A) rounds.
(Gaps in this proof: when exactly does it end? how do we find x?)
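One way to resolve the two gaps (a sketch, with my own function name): pick x = A/B (integer division), so A − xB = A mod B, and stop when the second number reaches 0.

```python
import math

def gcd_rounds(a: int, b: int):
    """Euclidean algorithm; also counts how many rounds it takes."""
    rounds = 0
    while b != 0:
        a, b = b, a % b   # gcd(A, B) = gcd(B, A - xB) with x = A/B
        rounds += 1
    return a, rounds

g, r = gcd_rounds(10**9, 123456789)
print(g, r)   # by the halving argument, r is at most about 2*log2(10^9) ~ 60
```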
Big Omega, Theta Notation
We say $f(n) = \Omega(g(n))$ or $f(n) \ge \Omega(g(n))$ if $\exists C$ s.t. $f(n) \ge Cg(n)$ for n large enough.
When $f(n) \ge \Omega(g(n))$ and $f(n) \le O(g(n))$, then we say $f(n) = \Theta(g(n))$, meaning f and g are of the same order.
Properties:
- If $f(n) = \Theta(g(n))$, then $\lim_{n\to\infty} f(n)/g(n)$ can't be zero nor infinity; the limit might not exist, but the ratio is bounded by constants
- If $f(n) = \Omega(g(n))$, then $g(n) = O(f(n))$
- If $f(n) = O(g(n))$, then $g(n) = \Omega(f(n))$
worst case time complexity: $F(N) = \max_{i\in I(N)} T(i)$, where I(N) is the set of inputs of size N.
average case time complexity: $F(N) = E[T(x)]$ for $x \in I(N)$, where the expectation is over a distribution D defined on the domain I(N).
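The two definitions can be made concrete with linear search (the setup and the uniform choice of D are my own illustration): the inputs of size N are the N possible target positions, and T(i) is the number of comparisons to find position i.

```python
def linear_search_cost(arr, target):
    """Return the number of comparisons linear search makes."""
    for steps, x in enumerate(arr, start=1):
        if x == target:
            return steps
    return len(arr)

N = 100
arr = list(range(N))
costs = [linear_search_cost(arr, t) for t in arr]  # T(i) for each input i

worst = max(costs)                  # F(N) = max_i T(i) = N
average = sum(costs) / len(costs)   # E[T] under uniform D = (N+1)/2
print(worst, average)               # -> 100 50.5
```

Both quantities are Theta(N) here; the distinction matters for algorithms like quicksort, where worst and average case differ by more than a constant factor.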