Algorithm Review Questions

  1. The O-notation provides an asymptotic upper bound. The Ω-notation provides an asymptotic lower bound. The Θ-notation bounds a function asymptotically from above and below.

  1. To represent a heap as an array, the root of the tree is A[1], and given the index i of a node, its parent is Parent(i) = ⌊i/2⌋, its left child is Left(i) = 2i, and its right child is Right(i) = 2i + 1.
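The index arithmetic above can be sketched as three one-liners (Python, assuming the 1-based array convention of the text, with A[0] unused):

```python
def parent(i):
    # Index of the parent of node i in a 1-based heap array.
    return i // 2

def left(i):
    # Index of the left child of node i.
    return 2 * i

def right(i):
    # Index of the right child of node i.
    return 2 * i + 1
```

For example, node 5 has parent 2 and children 10 and 11.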

  1. Because a heap of n elements is a binary tree, the height of any node is at most Θ(lg n).

  1. In optimization problems, there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem.

  1. A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.

  1. Z is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have xij = zj.

Let X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any LCS of X and Y.

(1). If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.

(2). If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.

(3). If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

  1. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

  1. The greedy-choice property and optimal substructure are the two key ingredients of a greedy algorithm.

  1. When a recursive algorithm revisits the same problem over and over again, we say that the optimization problem has overlapping subproblems.

  1. The greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice.

  1. An approach based on matrix multiplication yields a Θ(V⁴)-time algorithm for the all-pairs shortest-paths problem, whose running time can then be improved to Θ(V³ lg V).

  1. The Floyd-Warshall algorithm runs in Θ(V³) time to solve the all-pairs shortest-paths problem.

  1. The running time of Quick Sort is O(n²) in the worst case and O(n lg n) in the average case.

  1. The MERGE(A, p, q, r) procedure in merge sort takes Θ(n) time.

  1. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm makes |V| - 1 passes over the edges of the graph.

  1. The Bellman-Ford algorithm runs in O(VE) time.
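Both facts — the |V| - 1 relaxation passes and the O(VE) running time — are visible in a minimal Bellman-Ford sketch (the graph representation and function name are my own assumptions, not from the text):

```python
def bellman_ford(vertices, edges, s):
    # edges: iterable of (u, v, w) triples.
    # Returns (dist, True) on success, or (dist, False) if a
    # negative-weight cycle is reachable from s.
    dist = {v: float("inf") for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 passes ...
        for u, v, w in edges:            # ... each relaxing all |E| edges
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # any further improvement => negative cycle
        if dist[u] + w < dist[v]:
            return dist, False
    return dist, True
```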

  1. A decision tree represents the comparisons made by a comparison sort. The asymptotic height of any decision tree that sorts n elements is Ω(n lg n).

True-false questions

  1. An algorithm is said to be correct if, for some input instance, it halts with the correct output.   F

  1. Insertion sort always beats merge sort.  F

  1. Θ(n lg n) grows more slowly than Θ(n²). Therefore, merge sort asymptotically beats insertion sort in the worst case.  T

  1. Currently computers are fast and computer memory is very cheap, we have no reason to study algorithms.  F
  2. In RAM (Random-Access Machine) model, instructions are executed with concurrent operations.  F
  3. The running time of an algorithm on a particular input is the number of primitive operations or “steps” executed.     T
  4. Quick sorts have no combining step: the two subarrays form an already-sorted array.  T
  5. The running time of Counting sort is O(n + k). But the running time of sorting is Ω(n lg n). So this is a contradiction.  F
  6. The Counting sort is stable.       T
  7. In the selection problem, there is an algorithm of theoretical interest only with O(n) worst-case running time.    T
  8. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.         T
  9. In dynamic programming, we build an optimal solution to the problem from optimal solutions to subproblems.         T
  10. The best-case running time is the longest running time for any input of size n.    F
  11. When we analyze the running time of an algorithm, we are actually interested in the rate of growth (order of growth).         T
  12. The dynamic programming approach means that it break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.    T
  13. Insertion sort and merge sort both use divide-and-conquer approach.    F
  14. Θ(g(n)) = { f (n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f (n) ≤ c2 g(n) for all n ≥ n0 }
  15. Min-Heaps satisfy the heap property: A[Parent(i)] ≥ A[i] for all nodes i > 1. F
  16. For an array of length n, all elements in range A[⌊n/2⌋ + 1 .. n] are heaps.      T
  17. The tighter bound of the running time to build a max-heap from an unordered array isn't linear time.         F
  18. The call to BuildHeap() takes O(n) time, Each of the n - 1 calls to Heapify() takes O(lg n) time, Thus the total time taken by HeapSort() = O(n) + (n - 1) O(lg n)= O(n) + O(n lg n)= O(n lg n).          T
  19. Quick Sort is a dynamic programming algorithm. The array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r], All elements in A[p..q] are less than all elements in A[q+1..r], the subarrays are recursively sorted by calls to quicksort.          F
  20. Assume that we have a connected, undirected graph G = (V, E) with a weight function w : E → R, and we wish to find a minimum spanning tree for G. Both Kruskal's and Prim's algorithms use a dynamic programming approach to the problem.            F
  21. A cut (S, V - S) of an undirected graph G = (V, E) is a partition of E.    F
  22. An edge is a light edge crossing a cut if its weight is the maximum of any edge crossing the cut.        F
  23. Kruskal's algorithm uses a disjoint-set data structure to maintain several disjoint sets of elements.           T
  24. The optimal-substructure property is a hallmark of the applicability of both dynamic programming and greedy algorithms.            T
  25. Dijkstra's algorithm is a dynamic programming algorithm.          F
  26. Floyd-Warshall algorithm, which finds shortest paths between all pairs of vertices , is a greedy algorithm.                  F
  27. Given a weighted, directed graph G = (V, E) with weight function w : E → R, let p = <v1, v2, ..., vk> be a shortest path from vertex v1 to vertex vk and, for any i and j such that 1 ≤ i ≤ j ≤ k, let pij = <vi, vi+1, ..., vj> be the subpath of p from vertex vi to vertex vj. Then, pij is a shortest path from vi to vj.       T
  28. Given a weighted, directed graph G = (V, E) with weight function w : E → R, if there is a negative-weight cycle on some path from s to v, there exists a shortest path from s to v.            F
  29. Since any acyclic path in a graph G = (V, E) contains at most |V| distinct vertices, it also contains at most |V| - 1 edges. Thus, we can restrict our attention to shortest paths of at most |V| - 1 edges.             T
  30. The process of relaxing an edge (u, v) tests whether we can improve the shortest path to v found so far by going through u.     T
  31. In Dijkstra's algorithm and the shortest-paths algorithm for directed acyclic graphs, each edge is relaxed exactly once. In the Bellman-Ford algorithm, each edge is also relaxed exactly once .     F
  32. The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights must be negative.    F
  33. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm can not return a Boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source.   F
  34. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, for the Bellman-Ford algorithm, if there is such a cycle, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights.        F
  35. Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are negative.     F
  36. Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are nonnegative. Bellman-Ford algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E), the running time of Dijkstra's algorithm is lower than that of the Bellman-Ford algorithm.    T
  37. The steps for developing a dynamic-programming algorithm:1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value of an optimal solution in a bottom-up fashion. 4. Construct an optimal solution from computed information.    T

 

三 Each of the n input elements is an integer in the range 0 to k. Design a linear-time algorithm to sort the n elements.

四 Design an expected-linear-time algorithm to find the ith smallest element of n elements using the divide-and-conquer strategy.

五 Write the INSERTION-SORT procedure to sort into non-decreasing order. Analyze its running time with the RAM model. What are the best-case, worst-case, and average-case running times? Write the MERGE-SORT procedure to sort into non-decreasing order. Give the recurrence for the worst-case running time T(n) of merge sort and find the solution to the recurrence.

 

六 What is an optimal Huffman code for the following set of frequencies: <a:45, b:13, c:12, d:16, e:9, f:5>?

七 The traveling-salesman problem (TSP): in the traveling-salesman problem, we are given a complete undirected graph G = (V, E) that has a nonnegative integer cost c(u, v) associated with each edge (u, v) ∈ E, and we must find a tour of G with minimum cost. The following is an instance of TSP. Please compute a tour with minimum cost using the greedy algorithm.

八 Given items of different values and weights, find the most valuable set of items that fits in a knapsack of fixed capacity C. For an instance of the knapsack problem, n = 8, C = 110, values V = {11, 21, 31, 33, 43, 53, 55, 65}, weights W = {1, 11, 21, 23, 33, 43, 45, 55}. Use a greedy algorithm to solve the knapsack problem.

 

Use dynamic programming to solve the assembly-line scheduling problem: A Motors Corporation produces automobiles in a factory that has two assembly lines, numbered i = 1, 2. Each line has n stations, numbered j = 1, 2, ..., n. We denote the jth station on line i by Sij. The following figure gives an instance of the assembly-line problem with entry times ei, exit times xi, the assembly time required at station Sij denoted aij, and the time to transfer a chassis away from assembly line i after having gone through station Sij denoted tij. Please compute the fastest time and construct the fastest way through the factory for the instance.

Instance data (recovered from the figure): e1 = 2, e2 = 4; x1 = 3, x2 = 2; a1,j = (7, 9, 3, 4, 8, 4); a2,j = (8, 5, 6, 4, 5, 7); t1,j = (2, 3, 1, 3, 4); t2,j = (2, 1, 2, 2, 1).

 

十 The matrix-chain multiplication problem can be stated as follows: given a chain <A1, A2, ..., An> of matrices, where for i = 1, 2, ..., n, matrix Ai has dimension pi-1 × pi, fully parenthesize the product A1 A2 ... An in a way that minimizes the number of scalar multiplications. We pick as our subproblems the problems of determining the minimum cost of a parenthesization of Ai Ai+1 ... Aj for 1 ≤ i ≤ j ≤ n. Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai..j; for the full problem, the cost of a cheapest way to compute A1..n would thus be m[1, n]. Can you define m[i, j] recursively? Find an optimal parenthesization of a matrix-chain product whose sequence of dimensions is <4, 3, 5, 2, 3>.

十一 In the longest-common-subsequence (LCS) problem, we are given two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> and wish to find a maximum-length common subsequence of X and Y. Please write its recursive formula and determine an LCS of sequence S1 = ACTGATCG and sequence S2 = CATGC. Please fill in the blanks in the table below.

          C    A    T    G    C

(blank 9 × 6 table to be filled in)

十二 Proof: Any comparison sort algorithm requires Ω(n lg n) comparisons in the worst case.

十三Proof: Subpaths of shortest paths are shortest paths. 

Given a weighted, directed graph G = (V, E) with weight function w : E → R, let p = <v1, v2, ..., vk> be a shortest path from vertex v1 to vertex vk and, for any i and j such that 1 ≤ i ≤ j ≤ k, let pij = <vi, vi+1, ..., vj> be the subpath of p from vertex vi to vertex vj. Then, pij is a shortest path from vi to vj.

十四 Proof: The worst-case running time of quicksort is Θ(n²).

十五Compute shortest paths with matrix multiplication and the Floyd-Warshall algorithm for the following graph.

十六 Write the MAX-HEAPIFY() procedure for manipulating max-heaps, and analyze the running time of MAX-HEAPIFY().

 

三 (10 points)

CountingSort(A, B, k)
    for i = 0 to k
        C[i] = 0
    for j = 1 to n
        C[A[j]] = C[A[j]] + 1
    // C[i] now holds the number of elements equal to i
    for i = 1 to k
        C[i] = C[i] + C[i-1]
    // C[i] now holds the number of elements less than or equal to i
    for j = n downto 1
        B[C[A[j]]] = A[j]
        C[A[j]] = C[A[j]] - 1
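The same procedure transcribed into runnable Python (0-based output list; a sketch of my own, not part of the original answer):

```python
def counting_sort(A, k):
    # Stable sort of a list of integers in range 0..k; Θ(n + k) time.
    C = [0] * (k + 1)
    for x in A:                      # count occurrences of each key
        C[x] += 1
    for i in range(1, k + 1):        # prefix sums: C[i] = # of elements <= i
        C[i] += C[i - 1]
    B = [0] * len(A)
    for x in reversed(A):            # right-to-left placement keeps it stable
        C[x] -= 1
        B[C[x]] = x
    return B
```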

(3 points for the algorithm description)

The best-case running time of insertion sort is T(n) = c1n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1) = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8). This running time can be expressed as an + b for constants a and b that depend on the statement costs ci; it is thus a linear function of n.

The worst-case running time can be expressed as an² + bn + c for constants a, b, and c that again depend on the statement costs ci; it is thus a quadratic function of n.

(2 points for the analysis)

(2 points for the algorithm description)

T(n) = Θ(1) if n = 1,
T(n) = 2T(n/2) + Θ(n) if n > 1; the solution is T(n) = Θ(n lg n).

(3 points for the recurrence and its solution)

RAND-SELECT(A, p, r, i)  (5 points)

if p = r then return A[p]

q ← RAND-PARTITION(A, p, r)

k ← q – p + 1

if i = k then return A[q]

if i < k

then return RAND-SELECT(A, p, q – 1, i )

else return RAND-SELECT(A, q + 1, r, i – k )

 

RANDOMIZED-PARTITION(A, p, r)  (5 points)
{ i ← RANDOM(p, r)
  exchange A[r] ↔ A[i]
  return PARTITION(A, p, r)
}

PARTITION(A, p, r)
{ x ← A[r]
  i ← p - 1
  for j ← p to r - 1
      do if A[j] ≤ x
             then i ← i + 1
                  exchange A[i] ↔ A[j]
  exchange A[i+1] ↔ A[r]
  return i + 1
}
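The two procedures above, transcribed as runnable Python (0-based indices; `rand_select` returns the i-th smallest element with i counted from 1; the transcription is mine):

```python
import random

def partition(A, p, r):
    # Lomuto partition around pivot A[r]; returns the pivot's final index.
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def rand_select(A, p, r, i):
    # i-th smallest element of A[p..r]; expected O(n) time.
    if p == r:
        return A[p]
    s = random.randint(p, r)         # randomize the pivot choice
    A[s], A[r] = A[r], A[s]
    q = partition(A, p, r)
    k = q - p + 1                    # rank of the pivot within A[p..r]
    if i == k:
        return A[q]
    if i < k:
        return rand_select(A, p, q - 1, i)
    return rand_select(A, q + 1, r, i - k)
```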

 

First draw the corresponding graph and label it. Starting from city 1, each step greedily chooses the minimum-weight edge to an unvisited city as the next city to visit. (5 points for the strategy)

 

(5 points for the solution)

 

<a:45, b:13, c:12,d:16,e:9,f:5>  

             

a:1  b:100  c:101   d:111   e:1100   f:1101  
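A heap-based sketch of the Huffman construction for these frequencies (my own sketch; the exact bit strings may differ from the listing above by 0/1 swaps at internal nodes, but the code lengths, and hence the total cost, are the same):

```python
import heapq

def huffman_code_lengths(freqs):
    # freqs: dict symbol -> frequency; returns dict symbol -> code length.
    heap = [(f, [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    while len(heap) > 1:
        f1, syms1 = heapq.heappop(heap)   # the two least-frequent trees
        f2, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:           # merging deepens every leaf by 1
            depth[s] += 1
        heapq.heappush(heap, (f1 + f2, syms1 + syms2))
    return depth
```

For the given frequencies this yields lengths 1, 3, 3, 3, 4, 4 for a, b, c, d, e, f, matching the listed code.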

 

 

V={11,21,31,33,43,53,55,65} weight W={1,11,21,23,33,43,45,55}

Sort the items by value per unit weight, then put items into the knapsack in that order while they still fit.
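A sketch of this density-greedy strategy on the given instance (function name is mine; note that for the 0-1 knapsack this greedy choice is only a heuristic and need not be optimal):

```python
def greedy_knapsack(values, weights, C):
    # Take items in decreasing value/weight order while they fit.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_v = total_w = 0
    chosen = []
    for i in order:
        if total_w + weights[i] <= C:
            chosen.append(i)
            total_w += weights[i]
            total_v += values[i]
    return total_v, total_w, chosen

V = [11, 21, 31, 33, 43, 53, 55, 65]
W = [1, 11, 21, 23, 33, 43, 45, 55]
```

On this instance the densities are already decreasing, so the greedy takes the first five items (total value 139, weight 89); an exhaustive or DP solution can do better here, which illustrates why greedy is not exact for 0-1 knapsack.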

(4 points for the recurrence)

f1[1]=9    f2[1]=12

f1[2]=18    f2[2]=16

f1[3]=20    f2[3]=22

f1[4]=24    f2[4]=25

f1[5]=32    f2[5]=30

f1[6]=35    f2[6]=37

the fastest time is 38 and the fastest way is:

station 1:line 1

station 2:line 2

station 3:line 1

station 4:line 2

station 5: line 2

station 6: line 1

(6 points for the solution)

(4 points for the recurrence)

m[1,1]=0  m[2,2]=0  m[3,3]=0  m[4,4]=0

m[1,2]=m[1,1]+m[2,2]+p0*p1*p2=60

m[2,3]=m[2,2]+m[3,3]+p1*p2*p3=30

m[3,4]=m[3,3]+m[4,4]+p2*p3*p4=30

m[1,3]=min{m[1,2]+m[3,3]+p0*p2*p3, m[1,1]+m[2,3]+p0*p1*p3}=54

m[2,4]=min{m[2,3]+m[4,4]+p1*p3*p4, m[2,2]+m[3,4]+p1*p2*p4}=48

m[1,4]=min{m[1,1]+m[2,4]+p0*p1*p4, m[1,2]+m[3,4]+p0*p2*p4, m[1,3]+m[4,4]+p0*p3*p4}=78

 

((A1(A2A3))A4)
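The m-table above follows the recurrence m[i, j] = 0 if i = j, and otherwise the minimum over i ≤ k < j of m[i, k] + m[k+1, j] + p_{i-1} p_k p_j, computed bottom-up (a sketch of mine; the split table s recovers the parenthesization):

```python
def matrix_chain(p):
    # p: dimension sequence; matrix A_i is p[i-1] x p[i].
    # Returns the cost table m and split table s (1-based in both indices).
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s
```

For p = <4, 3, 5, 2, 3>, s[1][4] = 3 and s[1][3] = 1, which reproduces the parenthesization ((A1(A2A3))A4).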

(6 points for the solution)

十一

 

 

(4 points for the recurrence)

 

           C   A   T   G   C
        0  0   0   0   0   0
   A    0  0   1   1   1   1
   C    0  1   1   1   1   2
   T    0  1   1   2   2   2
   G    0  1   1   2   3   3
   A    0  1   2   2   3   3
   T    0  1   2   3   3   3
   C    0  1   2   3   3   4
   G    0  1   2   3   4   4

The length of the longest common subsequence is 4; one LCS is ATGC.

(6 points for the solution)
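The table and the LCS itself follow from the standard recurrence c[i, j] = c[i-1, j-1] + 1 when xi = yj, and max(c[i-1, j], c[i, j-1]) otherwise; a runnable sketch (names mine):

```python
def lcs(X, Y):
    # Bottom-up LCS table; returns (length, one LCS string).
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # walk back from c[m][n] to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))
```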

十二  From the preceding discussion, it suffices to determine the height of a decision tree in which each permutation appears as a reachable leaf. Consider a decision tree of height h with l reachable leaves corresponding to a comparison sort on n elements. Because each of the n! permutations of the input appears as some leaf, we have n! ≤ l. Since a binary tree of height h has no more than 2^h leaves, we have (5 points for the analysis)

n! ≤ l ≤ 2^h,

which, by taking logarithms, implies

h ≥ lg(n!)  (since the lg function is monotonically increasing)

  = Ω(n lg n)

(5 points for setting up and solving)

十三

Proof: If we decompose path p into v1 → vi → vj → vk, then we have w(p) = w(p1i) + w(pij) + w(pjk). Now, assume that there is a path p'ij from vi to vj with weight w(p'ij) < w(pij). Then the path v1 → vi → vj → vk that takes p'ij from vi to vj is a path from v1 to vk whose weight w(p1i) + w(p'ij) + w(pjk) is less than w(p), which contradicts the assumption that p is a shortest path from v1 to vk.

(5 points for the proof-by-contradiction setup, 5 points for the analysis)

十四

(5 points for setting up, 5 points for solving)

十五

matrix multiplication: [sequence of distance matrices omitted]  (5 points)

Floyd-Warshall algorithm: [sequence of distance matrices omitted]

十六

Heapify(A, i)

{

l = Left(i); r = Right(i);

if (l <= heap_size(A) && A[l] > A[i])

largest = l;

else

largest = i;

if (r <= heap_size(A) && A[r] > A[largest])

largest = r;

if (largest != i)

Swap(A, i, largest);

Heapify(A, largest);

}

Fixing up the relationships between i, l, and r takes Θ(1) time. If the heap rooted at i has n elements, the subtrees at l or r can have at most 2n/3 elements. So the time taken by Heapify() satisfies T(n) ≤ T(2n/3) + Θ(1); by the recursion tree, the solution is T(n) = O(lg n).

(4 points for the algorithm description, 3 points for the recurrence, 3 points for solving it)
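The Heapify() above as runnable Python over a 1-based array (A[0] is an unused placeholder; the transcription is mine):

```python
def max_heapify(A, i, heap_size):
    # Float A[i] down until the subtree rooted at i is a max-heap.
    # A is 1-based: A[0] is an unused placeholder.
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= heap_size and A[l] > A[i] else i
    if r <= heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)
```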

一、True-false questions

  1. An algorithm is said to be correct if, for some input instance, it halts with the correct output (p6).

  1. Insertion sort always beats merge sort.

  1. Θ(n lg n) grows more slowly than Θ(n²). Therefore, merge sort asymptotically beats insertion sort in the worst case.

  1. Currently computers are fast and computer memory is very cheap, so we have no reason to study algorithms.

  1. In the RAM (random-access machine) model, instructions are executed one after another, with no concurrent operations. (p21)

  1. The running time of an algorithm on a particular input is the number of primitive operations or “steps” executed. (p23)

  1. The best-case running time of insertion sort is a quadratic function of the input size n. (p25)

  1. The worst-case running time is the longest running time for any input of size n. (p26)

  1. When we analyze the running time of an algorithm, we actually interested on the rate of growth (order of growth). (p26)

  1. The divide-and-conquer approach means that it break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem. (p28)

  1. Insertion sort and merge sort both use the divide-and-conquer approach.

  1. The MERGE(A, p, q, r) procedure in merge sort takes time Θ(n²). (p28)

  1. Θ(g(n)) = { f (n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f (n) ≤ c2 g(n) for all n ≥ n0 }  (p42)

  1. The O-notation provides an asymptotic lower bound. The Ω-notation provides an asymptotic lower bound. The Θ-notation bounds a function asymptotically from above and below. (p44-45)

1 To represent a heap as an array, the root of the tree is A[1], and given the index i of a node, its parent is Parent(i) = ⌊i/2⌋, its left child is Left(i) = 2i, and its right child is Right(i) = 2i + 1.

2 Min-heaps satisfy the heap property: A[Parent(i)] ≥ A[i] for all nodes i > 1.

3 Because the heap of n elements is a binary tree, the height of any node is at most Θ(lg n).

4 For an array of length n, all elements in range A[⌊n/2⌋ + 1 .. n] are heaps.

5 The running time of building a heap is O(n lg n).

6 The tighter bound of the running time to build a max-heap from an unordered array is linear.

 

7 The call to BuildHeap() takes O(n) time, Each of the n - 1 calls to Heapify() takes O(lg n) time, Thus the total time taken by HeapSort() = O(n) + (n - 1) O(lg n)= O(n) + O(n lg n)= O(n lg n).

8 A priority queue is a data structure for maintaining a set S of elements, each with an associated value or key.

9 The running time of Quick Sort is O(n lg n) in the average case, and O(n²) in the worst case.

10 Quick Sort is a divide-and-conquer algorithm. The array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r], All elements in A[p..q] are less than all elements in A[q+1..r], the subarrays are recursively sorted by calls to quicksort.

11 Quick sorts, unlike merge sorts, have no combining step: two subarrays form an already-sorted array.

12 A decision tree represents the comparisons made by a comparison sort.  

13 The asymptotic height of any decision tree for sorting n elements is Ω(n lg n).

14 The running time of Counting sort is O(n + k). But the running time of sorting is Ω(n lg n). So this is a contradiction.

15 The Counting sort is stable.

16 The radix sort can be used on card sorting.

17 In radix sort, Sort elements by digit starting with least significant, Use a stable sort (like counting sort) for each stage. 

18 In the selection problem, finding the ith smallest element of a set, there is a practical randomized algorithm with O(n) expected running time.

19 In the selection problem, there is an algorithm of theoretical interest only with O(n) worst-case running time.

 

1 Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. 

2 In optimization problems, there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem.

3 optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.

4 In dynamic programming, we build an optimal solution to the problem from optimal solutions to subproblems.

5 When a recursive algorithm revisits the same problem over and over again, we say that the optimization problem has overlapping subproblems.

6 Z is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have xij = zj.

7 Let X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any LCS of X and Y.

1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.

2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.

3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

  1. Kruskal's algorithm and Prim's algorithm can easily be made to run in time O(E lg V) using ordinary binary heaps.

  1. Assume that we have a connected, undirected graph G = (V, E) with a weight function w : E → R, and we wish to find a minimum spanning tree for G. Both Kruskal's and Prim's algorithms use a dynamic programming approach to the problem.

  1. A cut (S, V - S) of an undirected graph G = (V, E) is a partition of V.

  1. An edge is a light edge crossing a cut if its weight is the maximum of any edge crossing the cut.

  1. Kruskal's algorithm uses a disjoint-set data structure to maintain several disjoint sets of elements.

  1. Given a graph G = (V, E), we want to find a shortest path from a given source vertex s ∈ V to each vertex v ∈ V. This problem is called the single-source shortest-paths problem.

  1. The optimal-substructure property is a hallmark of the applicability of both dynamic programming and greedy algorithms.

  1. Dijkstra's algorithm is a dynamic programming algorithm.

  1. Floyd-Warshall algorithm, which finds shortest paths between all pairs of vertices , is a greedy algorithm.

  1. Given a weighted, directed graph G = (V, E) with weight function w : E → R, let p = <v1, v2, ..., vk> be a shortest path from vertex v1 to vertex vk and, for any i and j such that 1 ≤ i ≤ j ≤ k, let pij = <vi, vi+1, ..., vj> be the subpath of p from vertex vi to vertex vj. Then, pij is a shortest path from vi to vj.

  1. Given a weighted, directed graph G = (V, E) with weight function w : E → R, if there is a negative-weight cycle on some path from s to v, there exists a shortest path from s to v.

  1. Since any acyclic path in a graph G = (V, E) contains at most |V| distinct vertices, it also contains at most |V| - 1 edges. Thus, we can restrict our attention to shortest paths of at most |V| - 1 edges.

  1. The process of relaxing an edge (u, v) tests whether we can improve the shortest path to v found so far by going through u.

  1. In Dijkstra's algorithm and the shortest-paths algorithm for directed acyclic graphs, each edge is relaxed exactly once. In the Bellman-Ford algorithm, each edge is relaxed many times.

  1. The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights must be negative.

  1. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm can not return a Boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source.

  1. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, for the Bellman-Ford algorithm, if there is such a cycle, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights.

  1. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm makes |V| - 1 passes over the edges of the graph.

  1. The Bellman-Ford algorithm runs in time O(V E).

  1. Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are negative. (see 21)

 

  1. Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are nonnegative. Bellman-Ford algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E), the running time of Dijkstra's algorithm is lower than that of the Bellman-Ford algorithm.

  1. Dijkstra's algorithm maintains a set S of vertices whose final shortest-path weights from the source s have already been determined. The algorithm repeatedly selects the vertex u ∈ V - S with the minimum shortest-path estimate, adds u to S, and relaxes all edges leaving u.

  1. The steps for developing a dynamic-programming algorithm:1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value of an optimal solution in a bottom-up fashion. 4. Construct an optimal solution from computed information.

  1. An approach based on matrix multiplication can develop a Θ(V⁴)-time algorithm for the all-pairs shortest-paths problem and then improve its running time to Θ(V³ lg V).
  2. The Floyd-Warshall algorithm runs in Θ(V³) time to solve the all-pairs shortest-paths problem.

 

 

Computation

1. Kruskal's algorithm

2. Prim's algorithm

3. Bellman-Ford algorithm

4. Dijkstra's algorithm

5. Matrix-multiply algorithm

6. Floyd-Warshall algorithm

 

 

 

 

 

Recurrences

  1. Assembly-line scheduling

  1. Matrix-chain multiplication

  1. Longest common subsequence (LCS)

4. Floyd-Warshall algorithm

Main recurrence:

5. Optimal binary (Huffman) tree

Proofs:

1.

2.

 

 

 
