Complexity Analysis of Common Algorithms

Hi there! This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science. When preparing for technical interviews in the past, I found myself spending hours crawling the internet, putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. Over the last few years, I've interviewed at several Silicon Valley startups and also some bigger companies, like Yahoo, eBay, LinkedIn, and Google, and each time I prepared for an interview, I thought to myself, "Why oh why hasn't someone created a nice Big-O cheat sheet?" So, to save all of you fine folks a ton of time, I went ahead and created one. Enjoy!


Searching

| Algorithm | Data Structure | Time Complexity (Average) | Time Complexity (Worst) | Space Complexity (Worst) |
|---|---|---|---|---|
| Depth First Search (DFS) | Graph of \|V\| vertices and \|E\| edges | - | O(\|E\| + \|V\|) | O(\|V\|) |
| Breadth First Search (BFS) | Graph of \|V\| vertices and \|E\| edges | - | O(\|E\| + \|V\|) | O(\|V\|) |
| Binary search | Sorted array of n elements | O(log(n)) | O(log(n)) | O(1) |
| Linear (Brute Force) | Array | O(n) | O(n) | O(1) |
| Shortest path by Dijkstra, using a min-heap as priority queue | Graph with \|V\| vertices and \|E\| edges | O((\|V\| + \|E\|) log \|V\|) | O((\|V\| + \|E\|) log \|V\|) | O(\|V\|) |
| Shortest path by Dijkstra, using an unsorted array as priority queue | Graph with \|V\| vertices and \|E\| edges | O(\|V\|^2) | O(\|V\|^2) | O(\|V\|) |
| Shortest path by Bellman-Ford | Graph with \|V\| vertices and \|E\| edges | O(\|V\| \|E\|) | O(\|V\| \|E\|) | O(\|V\|) |
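
As a concrete illustration of the O(log(n)) binary search row above, here is a minimal iterative sketch in Python; the function and variable names are illustrative, not from the original cheat sheet:

```python
# Iterative binary search: O(log(n)) time, O(1) space.
# Assumes `items` is sorted in ascending order.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # midpoint of the remaining interval
        if items[mid] == target:
            return mid             # found: return the index
        elif items[mid] < target:
            lo = mid + 1           # target can only be in the right half
        else:
            hi = mid - 1           # target can only be in the left half
    return -1                      # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```

Each iteration halves the remaining interval, so at most about log2(n) iterations run, and only three index variables are kept, which is where the O(1) space entry comes from.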

Sorting

| Algorithm | Data Structure | Time (Best) | Time (Average) | Time (Worst) | Worst-Case Auxiliary Space |
|---|---|---|---|---|---|
| Quicksort | Array | O(n log(n)) | O(n log(n)) | O(n^2) | O(log(n)) |
| Mergesort | Array | O(n log(n)) | O(n log(n)) | O(n log(n)) | O(n) |
| Heapsort | Array | O(n log(n)) | O(n log(n)) | O(n log(n)) | O(1) |
| Bubble Sort | Array | O(n) | O(n^2) | O(n^2) | O(1) |
| Insertion Sort | Array | O(n) | O(n^2) | O(n^2) | O(1) |
| Selection Sort | Array | O(n^2) | O(n^2) | O(n^2) | O(1) |
| Bucket Sort | Array | O(n+k) | O(n+k) | O(n^2) | O(nk) |
| Radix Sort | Array | O(nk) | O(nk) | O(nk) | O(n+k) |
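
To make the table concrete, here is a minimal top-down mergesort sketch in Python (names are illustrative). It shows where the Mergesort row's O(n log(n)) time and O(n) auxiliary space come from: the recursion is log(n) levels deep, each level does O(n) total merge work, and the merge step allocates buffers proportional to the input.

```python
# Top-down mergesort: O(n log(n)) time in the best, average, and worst
# cases; O(n) auxiliary space for the merge buffers.
def merge_sort(items):
    if len(items) <= 1:
        return items                      # base case: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # sort each half recursively
    right = merge_sort(items[mid:])
    return merge(left, right)             # combine the sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:           # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])               # one side is exhausted;
    merged.extend(right[j:])              # append whatever remains
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # -> [1, 2, 5, 5, 6, 9]
```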

Data Structures

| Data Structure | Indexing (Avg) | Search (Avg) | Insertion (Avg) | Deletion (Avg) | Indexing (Worst) | Search (Worst) | Insertion (Worst) | Deletion (Worst) | Space (Worst) |
|---|---|---|---|---|---|---|---|---|---|
| Basic Array | O(1) | O(n) | - | - | O(1) | O(n) | - | - | O(n) |
| Dynamic Array | O(1) | O(n) | O(n) | O(n) | O(1) | O(n) | O(n) | O(n) | O(n) |
| Singly-Linked List | O(n) | O(n) | O(1) | O(1) | O(n) | O(n) | O(1) | O(1) | O(n) |
| Doubly-Linked List | O(n) | O(n) | O(1) | O(1) | O(n) | O(n) | O(1) | O(1) | O(n) |
| Skip List | O(n) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) | O(n) | O(n) | O(n) | O(n log(n)) |
| Hash Table | - | O(1) | O(1) | O(1) | - | O(n) | O(n) | O(n) | O(n) |
| Binary Search Tree | - | O(log(n)) | O(log(n)) | O(log(n)) | - | O(n) | O(n) | O(n) | O(n) |
| B-Tree | - | O(log(n)) | O(log(n)) | O(log(n)) | - | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| Red-Black Tree | - | O(log(n)) | O(log(n)) | O(log(n)) | - | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| AVL Tree | - | O(log(n)) | O(log(n)) | O(log(n)) | - | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
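
The hash table row shows the sharpest average/worst split in the table: with a well-distributed hash function each bucket stays short, giving average O(1) operations, but if every key collides, a single bucket degenerates into an O(n) list. A toy separate-chaining sketch in Python (illustrative only, not a production design):

```python
# Toy hash table with separate chaining. Average O(1) per operation
# when keys spread evenly across buckets; worst case O(n) when all
# keys land in one bucket.
class ChainedHashTable:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite an existing key
                return
        bucket.append((key, value))        # scan + append: O(bucket length)

    def search(self, key):
        for k, v in self._bucket(key):     # scans only one bucket
            if k == key:
                return v
        return None                        # key absent

table = ChainedHashTable()
table.insert("answer", 42)
print(table.search("answer"))  # -> 42
```

Real implementations also resize and rehash as the table fills to keep buckets short; that is omitted here.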

Heaps

| Heap | Heapify | Find Max | Extract Max | Increase Key | Insert | Delete | Merge |
|---|---|---|---|---|---|---|---|
| Linked List (sorted) | - | O(1) | O(1) | O(n) | O(n) | O(1) | O(m+n) |
| Linked List (unsorted) | - | O(n) | O(n) | O(1) | O(1) | O(1) | O(1) |
| Binary Heap | O(log(n)) | O(1) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(m+n) |
| Binomial Heap | - | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) |
| Fibonacci Heap | - | O(1) | O(log(n))* | O(1)* | O(1) | O(log(n))* | O(1) |

(*) Amortized time.
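
In the Binary Heap row, insert and extract-max each walk a single root-to-leaf path of height log(n), which is where the O(log(n)) bounds come from. A minimal list-backed max-heap sketch (illustrative; Python's standard heapq module provides a production-quality min-heap):

```python
# List-backed binary max-heap: parent of index i is (i - 1) // 2,
# children are 2i + 1 and 2i + 2.
class BinaryMaxHeap:
    def __init__(self):
        self.data = []

    def insert(self, value):
        # O(log(n)): sift the new value up one root-to-leaf path.
        self.data.append(value)
        i = len(self.data) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self.data[parent] >= self.data[i]:
                break                      # heap property restored
            self.data[parent], self.data[i] = self.data[i], self.data[parent]
            i = parent

    def extract_max(self):
        # O(log(n)): move the last leaf to the root, then sift it down.
        top = self.data[0]
        last = self.data.pop()
        if self.data:
            self.data[0] = last
            i, n = 0, len(self.data)
            while True:
                left, right, largest = 2 * i + 1, 2 * i + 2, i
                if left < n and self.data[left] > self.data[largest]:
                    largest = left
                if right < n and self.data[right] > self.data[largest]:
                    largest = right
                if largest == i:
                    break                  # both children are smaller
                self.data[i], self.data[largest] = self.data[largest], self.data[i]
                i = largest
        return top

heap = BinaryMaxHeap()
for v in [3, 9, 1, 7]:
    heap.insert(v)
print(heap.extract_max())  # -> 9
```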

Graphs

| Node / Edge Management | Storage | Add Vertex | Add Edge | Remove Vertex | Remove Edge | Query |
|---|---|---|---|---|---|---|
| Adjacency list | O(\|V\|+\|E\|) | O(1) | O(1) | O(\|V\| + \|E\|) | O(\|E\|) | O(\|V\|) |
| Incidence list | O(\|V\|+\|E\|) | O(1) | O(1) | O(\|E\|) | O(\|E\|) | O(\|E\|) |
| Adjacency matrix | O(\|V\|^2) | O(\|V\|^2) | O(1) | O(\|V\|^2) | O(1) | O(1) |
| Incidence matrix | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|E\|) |
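
As a sketch of the adjacency-list row: adding a vertex or an edge is an O(1) append, while removing a vertex must purge it from every neighbour list, hence O(|V| + |E|). A minimal directed-graph version in Python (names are illustrative):

```python
# Adjacency-list graph: O(|V| + |E|) storage, O(1) add-vertex/add-edge.
from collections import defaultdict

class Graph:
    def __init__(self):
        self.adj = defaultdict(list)       # vertex -> list of out-neighbours

    def add_vertex(self, v):               # O(1)
        self.adj[v]                        # touching the key creates the entry

    def add_edge(self, u, v):              # O(1): append to one list
        self.adj[u].append(v)

    def remove_vertex(self, v):            # O(|V| + |E|): purge v everywhere
        self.adj.pop(v, None)
        for neighbours in self.adj.values():
            while v in neighbours:
                neighbours.remove(v)

    def has_edge(self, u, v):              # query: scans one list, O(|V|) worst
        return v in self.adj[u]

g = Graph()
g.add_edge("a", "b")
print(g.has_edge("a", "b"))  # -> True
```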

Notation for asymptotic growth

| Letter | Bound | Growth |
|---|---|---|
| (theta) Θ | upper and lower, tight [1] | equal [2] |
| (big-oh) O | upper, tightness unknown | less than or equal [3] |
| (small-oh) o | upper, not tight | less than |
| (big omega) Ω | lower, tightness unknown | greater than or equal |
| (small omega) ω | lower, not tight | greater than |

[1] Big O is the upper bound, while Omega is the lower bound. Theta requires both Big O and Omega, which is why it's referred to as a tight bound (it must be both the upper and the lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time, but has no upper limit. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n (Omega(n log n)) and no more than n log n (O(n log n)).

[2] f(n) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (the input size) gets larger. In other words, the growth rate of f(n) is asymptotically proportional to that of g(n).

[3] Same idea, but here the growth rate is no faster than that of g(n). Big-O is the most useful notation in practice because it represents worst-case behavior.

In short, if an algorithm is __ then its performance is __:

| Algorithm | Performance |
|---|---|
| o(n) | < n |
| O(n) | ≤ n |
| Θ(n) | = n |
| Ω(n) | ≥ n |
| ω(n) | > n |
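
As a worked instance of notes [1] and [2], take a hypothetical running time f(n) = 3n^2 + 5n; exhibiting explicit constants shows that it is Θ(n^2):

```latex
% Claim: f(n) = 3n^2 + 5n = \Theta(n^2).
% Upper bound: for n >= 1, 3n^2 + 5n <= 3n^2 + 5n^2 = 8n^2, so f(n) = O(n^2).
% Lower bound: for n >= 1, 3n^2 + 5n >= 3n^2,               so f(n) = \Omega(n^2).
c_1 n^2 \;\le\; 3n^2 + 5n \;\le\; c_2 n^2
\quad \text{for all } n \ge n_0, \qquad c_1 = 3,\; c_2 = 8,\; n_0 = 1.
```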

Big-O Complexity Chart

[Figure: Big-O complexity chart comparing the growth rates of common complexity classes]

Source: http://bigocheatsheet.com/
