Notes on Algorithms, Chapter 2: Human Cannon, Stage One, Ready

Notes

The main purpose of this chapter is to establish a paradigm; simply put, how this algorithm business is meant to be played. There are really only two examples, insertion sort and merge sort, but plenty of tricks get pulled out of them:

Pseudocode, which lets you quickly write down the thought process of a computation without getting bogged down in a concrete language and its syntax.

Loop invariants. This is a hallmark of Western textbooks: they not only write down why something is done, they also rigorously argue that doing it this way is correct. This is exactly what my own thinking has lacked; I tend to work from a fuzzy, broad-strokes idea. If you never tamp the soil down afterwards, the platform can't be built very high. That said... thinking this way really is exhausting, haha...

Analysis of algorithm efficiency. The analysis starts from a model of the modern computer, which connects nicely with another book I've read, Computer Systems: A Programmer's Perspective; I've finally found my people, tears of joy~ The chapter gives a detailed analysis of insertion sort, derives an exact expression for its running time, and then reduces that expression to an order of growth, going from the complicated to the simple and seizing the main contradiction. Excellent.

Divide and conquer. Merge sort introduces one of the big ideas of algorithm design: divide and conquer. Split the problem up, solve the pieces, and combine the results at the end. Philosophically it is of the same lineage as calculus.

Recurrences. When analyzing the efficiency of merge sort, the first invention is the recurrence for T(n); combining the recurrence with the idea of a recursion tree yields T(n) = Θ(n lg n).


Exercise Solutions

2.1-1

Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = <31, 41, 59, 26, 41, 58>.

31, 41, 59, 26, 41, 58
31, 41, 59, 26, 41, 58
26, 31, 41, 59, 41, 58
26, 31, 41, 41, 59, 58
26, 31, 41, 41, 58, 59

Just sort it the dumb way, one element at a time.

2.1-2

Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

INSERTION-SORT(A)
	for j = 2 to A.length
		key = A[j]
		//Insert A[j] into the sorted sequence A[1..j-1], keeping nonincreasing order.
		i = j - 1
		while i > 0 and A[i] < key
			A[i+1] = A[i]
			i = i - 1
		A[i+1] = key

The only real change is in line 6: the comparison becomes A[i] < key instead of A[i] > key.
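For reference, a runnable Python version of the same nonincreasing sort (a minimal sketch; the function name is mine, not the book's):

def insertion_sort_desc(a):
    """Sort the list a in nonincreasing order, in place."""
    for j in range(1, len(a)):          # pseudocode's j = 2 to A.length, 0-based here
        key = a[j]
        i = j - 1
        # Shift every element smaller than key one slot to the right.
        while i >= 0 and a[i] < key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_desc([31, 41, 59, 26, 41, 58]))   # [59, 58, 41, 41, 31, 26]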

2.1-3

Consider the searching problem:

Input: A sequence of numbers A = <a1,a2,...,an> and a value v.

Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

LINEAR-SEARCH(A, v)
	for i = 1 to n
		if A[i] == v
			return i
	return NIL

Loop invariant: at the start of each iteration, v does not appear anywhere in A[1..i-1]; put more fancily, v ∉ {A[1], ..., A[i-1]}. Initialization: before the first iteration the subarray is empty, so the invariant holds trivially. Maintenance: i is only advanced after checking that A[i] ≠ v. Termination: either the loop returns an i with A[i] == v, or i runs past n, in which case the invariant says v appears nowhere in A and returning NIL is correct.
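A quick Python sketch of the same linear search, with the invariant asserted at each step (my own illustration):

def linear_search(a, v):
    """Return an index i with a[i] == v, or None if v is not in a."""
    for i in range(len(a)):
        # Loop invariant: v is not among a[0..i-1].
        assert v not in a[:i]
        if a[i] == v:
            return i
    return None

print(linear_search([31, 41, 59, 26], 59))   # 2
print(linear_search([31, 41, 59, 26], 7))    # None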

2.1-4

Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n+1)-element array C. State the problem formally and write pseudocode for adding the two integers.

Add(A[], B[])
	// assume A[1] and B[1] hold the least significant bits; the sum goes into C[1..n+1]
	for i = 1 to n + 1
		C[i] = 0
	for i = 1 to n
		C[i] = C[i] + A[i] + B[i]	// the two input bits plus any carry already in C[i]
		if C[i] == 2
			C[i+1] = 1
			C[i] = 0
		else if C[i] == 3
			C[i+1] = 1
			C[i] = 1
	return C

The true secret art of elementary-school addition with carrying! Thank you, elementary-school math teacher!
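Here's a small Python sketch of the same addition, using div/mod for the carry instead of the if chain (the names and the least-significant-bit-first ordering are my own assumptions):

def add_binary(a, b):
    """Add two n-bit binary integers a and b (a[0], b[0] are the least
    significant bits) and return their (n+1)-bit sum."""
    n = len(a)
    c = [0] * (n + 1)
    for i in range(n):
        s = a[i] + b[i] + c[i]      # current bits plus incoming carry
        c[i] = s % 2
        c[i + 1] = s // 2           # carry out
    return c

# 6 + 7 = 13:  110 and 111, least significant bit first.
print(add_binary([0, 1, 1], [1, 1, 1]))   # [1, 0, 1, 1]  -> 1101 = 13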

2.2-1

Express the function n^3/1000 - 100n^2 - 100n + 3 in terms of Θ-notation.

Θ(n^3)

Is there a nice way to write superscripts in Markdown? Writing "n^3" like this every time feels a bit awkward.

2.2-2

Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n-1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n-1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

Sort(A[])
	for i = 1 to A.len - 1			C1 = n-1
		min = A[i]					C2 = n-1
		k = i
		for j = i + 1 to A.len		C3 = ∑(n-i) {i=1 to n-1}
			if A[j] < min			C4 = C3
				min = A[j]			C5 = ∑(n-i)*Ti {i=1 to n-1}
							//Ti is 1 when the condition held, 0 otherwise
				k = j				C6 = C5
		exchange(A[i], A[k])		C7 = n-1

Loop invariant: at the start of each iteration of the outer loop, A[1..i-1] holds the i-1 smallest elements of A in sorted order. Once the first n-1 positions hold the n-1 smallest elements, the remaining element must be the largest, so it is already in place; that's why n-1 iterations suffice. As for the running time, you don't really need all the bookkeeping above: two nested loops whose inner loop always runs to completion give Θ(n^2) no matter what the if does, in both the best and the worst case. Physicist's intuition, hahaha.
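A runnable Python version of selection sort, for reference (a minimal sketch, not the book's code):

def selection_sort(a):
    """Sort a in nondecreasing order by repeatedly selecting the minimum."""
    n = len(a)
    for i in range(n - 1):              # only the first n-1 positions need filling
        k = i                           # index of the smallest element seen so far
        for j in range(i + 1, n):
            if a[j] < a[k]:
                k = j
        a[i], a[k] = a[k], a[i]         # put the minimum of a[i..n-1] into a[i]
    return a

print(selection_sort([31, 41, 59, 26, 41, 58]))   # [26, 31, 41, 41, 58, 59]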

2.2-3

Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

On average about n/2 elements are checked: if the target is equally likely to be at any of the n positions, the expected number of elements examined is ∑ i·(1/n) {i = 1 to n} = (n+1)/2.

In the worst case all n elements are checked (the target sits at the very end, or isn't there at all).

Either way the running time is Θ(n).

A more rigorous proof... probably needs a bit of probability theory, most of which I've forgotten -__-

2.2-4

How can we modify almost any algorithm to have a good best-case running time?

The shamelessness of this question is right up there with my younger self. The answer:

Rig the input! (For instance, have the algorithm check up front whether its input is already in the desired form, say already sorted, and return immediately if so.)

Just like in Star Trek, where Captain Kirk exploits a bug to pass a simulated test and dodge a no-win moral dilemma. Cheating with a straight face and a calm heart; that's what heroes (and antiheroes) are made of!

2.3-1

Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A = <3, 41, 52, 26, 38, 57, 9, 49>.

Too tedious to be any fun; skipping this one!

2.3-2

Rewrite the MERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.

Merge(A[], p, q, r)
	m = q - p + 1
	n = r - q
	L[1..m] = A[p..q]
	R[1..n] = A[q+1..r]
	// copy the two halves into working arrays L and R

	i = j = 1
	for k = p to r
		if i == m + 1   // L is exhausted; copy the rest of R
			A[k..r] = R[j..n]
			break
		if j == n + 1   // R is exhausted; copy the rest of L
			A[k..r] = L[i..m]
			break

		if L[i] <= R[j]
			A[k] = L[i]
			i++
		else
			A[k] = R[j]
			j++

This version is reasonably clean and clear, I'd say, hahaha.
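And a runnable Python sketch of the sentinel-free merge (indices are 0-based here, unlike the pseudocode):

def merge(a, p, q, r):
    """Merge the sorted slices a[p..q] and a[q+1..r] (inclusive) without sentinels."""
    left = a[p:q + 1]
    right = a[q + 1:r + 1]
    i = j = 0
    k = p
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
        k += 1
    # One of the two runs is exhausted; copy whatever remains of the other.
    a[k:r + 1] = left[i:] + right[j:]

arr = [2, 4, 5, 7, 1, 2, 3, 6]
merge(arr, 0, 3, 7)
print(arr)   # [1, 2, 2, 3, 4, 5, 6, 7]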

2.3-3

Use mathematical induction to show that when n is an exact power of 2, the solution of the recurrence

T(n) = 2				if n = 2
T(n) = 2T(n/2) + n		if n = 2^k, for k > 1

is

T(n) = n lg n

Mathematical induction, what a distant memory, like the flowery season of being eighteen... feeling wistful...

Base case: n = 2, and T(2) = 2 = 2 lg 2.

Inductive step: assume T(k) = k lg k for some power of two k. Then

T(2k) = 2T(k) + 2k
      = 2k lg k + 2k
      = 2k (lg k + 1)
      = 2k (lg k + lg 2)
      = 2k lg 2k

so the formula also holds for 2k, and by induction T(n) = n lg n for every exact power of 2. (Since lg 2 = 1 exactly, this is an exact equality, not just a Θ bound.)
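A quick numeric sanity check of the recurrence in Python; not a proof, just reassurance:

from math import log2

def t(n):
    """Evaluate the recurrence T(2) = 2, T(n) = 2*T(n//2) + n for n a power of 2."""
    return 2 if n == 2 else 2 * t(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    print(n, t(n), n * log2(n))   # the last two columns always agree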

2.3-4

We can express insertion sort as a recursive procedure as follows. In order to sort A[1..n], we recursively sort A[1..n-1] and then insert A[n] into the sorted array A[1..n-1]. Write a recurrence for the running time of this recursive version of insertion sort.

Sort(A[], i)
	if i == 1
		return
	Sort(A[], i-1)
	key = A[i]
	j = i - 1
	while j > 0 and A[j] > key
		A[j+1] = A[j]
		j = j - 1
	A[j+1] = key

The recurrence is T(n) = T(n-1) + Θ(n): the recursive call on the first n-1 elements plus the insertion pass. It solves to Θ(n^2), the same order of growth as the iterative version, except that it also burns stack space.
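A Python sketch of the recursive version (0-based indices, my own naming):

def insertion_sort_rec(a, i=None):
    """Recursively sort a[0..i-1]: sort the first i-1 elements, then insert a[i-1]."""
    if i is None:
        i = len(a)
    if i <= 1:
        return a
    insertion_sort_rec(a, i - 1)
    key = a[i - 1]
    j = i - 2
    while j >= 0 and a[j] > key:
        a[j + 1] = a[j]
        j -= 1
    a[j + 1] = key
    return a

print(insertion_sort_rec([31, 41, 59, 26, 41, 58]))   # [26, 31, 41, 41, 58, 59]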

2.3-5

Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against v and eliminate half of the sequence from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is Θ(lgn).

BinarySearch(A[], v)
	i = 1
	j = n
	while i <= j
		k = ⌊(i+j)/2⌋
		if v > A[k]
			i = k + 1
		else if v < A[k]
			j = k - 1
		else
			return k
	return NIL

The boundary conditions need a little care (move i to k+1 and j to k-1 so the range always shrinks). As for the running time: each iteration halves the length of the range being searched until it reaches 0, so if the loop runs m times then n/2^m ≥ 1, i.e. m ≤ lg n + 1, and the worst-case running time is Θ(lg n).
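A runnable Python sketch of the iterative binary search above (0-based indices):

def binary_search(a, v):
    """Return an index of v in the sorted list a, or None if v is absent."""
    i, j = 0, len(a) - 1
    while i <= j:
        k = (i + j) // 2
        if v > a[k]:
            i = k + 1        # v can only be in the right half
        elif v < a[k]:
            j = k - 1        # v can only be in the left half
        else:
            return k
    return None

data = [26, 31, 41, 41, 58, 59]
print(binary_search(data, 58))   # 4
print(binary_search(data, 40))   # None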

2.3-6

Observe that the while loop of lines 5–7 of the INSERTION-SORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray A[1..j-1]. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to Θ(nlgn)?

To borrow from comrade Obama: Yes, We Can!

So why "can"? Let's write the pseudocode first.

INSERTION-SORT(A)
	for j = 2 to A.length
		key = A[j]
		i = BinarySearch(A[1..j-1], key)    C1 = ∑lgj {j=2 to n}
			//boundary case: if BinarySearch returns NIL, is key larger than everything
			//in A[1..j-1], or smaller? it really needs to return an insertion position
		for k = j downto i+1				C2 = ∑(j/2) {j=2 to n}
			A[k] = A[k-1]
		A[i] = key

For the two key steps here: C1 = lg2 + lg3 + ... + lgn = Θ(n lg n), while C2 ≈ n(n-1)/4 = Θ(n^2).

OK, time to eat my own words: the binary search does save some time when locating the insertion point, but the moves do not improve; unless some machine can shift a whole block of memory in O(1), the shifting stays Θ(n^2). From another angle, replacing the array with a linked list makes each insertion O(1), but then BinarySearch degenerates into a linear scan...

So the conclusion, for now: insertion sort cannot be improved to Θ(n lg n) this way.
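To see the point concretely, here's a Python sketch that uses the standard-library bisect module to find each insertion point in O(lg j); the slice shift is still linear, so the overall time remains Θ(n^2):

from bisect import bisect_right

def binary_insertion_sort(a):
    """Insertion sort that finds each insertion point by binary search.
    Comparisons drop to O(n lg n), but the slice shift below still moves
    O(n) elements per insertion, so the total time stays Theta(n^2)."""
    for j in range(1, len(a)):
        key = a[j]
        i = bisect_right(a, key, 0, j)   # insertion point in the sorted prefix a[0..j-1]
        a[i + 1:j + 1] = a[i:j]          # shift; this is the Theta(n) part
        a[i] = key
    return a

print(binary_insertion_sort([31, 41, 59, 26, 41, 58]))   # [26, 31, 41, 41, 58, 59]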

2.3-7 *

Describe a Θ(nlgn)-time algorithm that, given a set S of n integers and another integer x, determines whether or not there exist two elements in S whose sum is exactly x.

The simple idea: sort S first; then for each element s, compute x - s and binary-search for it in the sorted set (taking care not to pair an element with itself).

Total running time = sorting, Θ(n lg n) + n iterations × BinarySearch, Θ(lg n)
				   = Θ(n lg n)
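A Python sketch of this sort-then-binary-search approach (assuming S is a set of distinct integers, as the problem states):

from bisect import bisect_left

def has_pair_with_sum(s, x):
    """Return True if two distinct elements of s sum to exactly x."""
    a = sorted(s)                        # Theta(n lg n)
    for i, v in enumerate(a):            # n iterations...
        j = bisect_left(a, x - v)        # ...of a Theta(lg n) binary search
        if j < len(a) and a[j] == x - v and j != i:
            return True
    return False

print(has_pair_with_sum({2, 7, 11, 15}, 9))    # True  (2 + 7)
print(has_pair_with_sum({2, 7, 11, 15}, 14))   # False (7 + 7 would reuse one element)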

Problem Solutions

2-1 Insertion sort on small arrays in merge sort

Although merge sort runs in Θ(nlgn) worst-case time and insertion sort runs in Θ(n^2) worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.

a. Show that insertion sort can sort the n/k sublists, each of length k, in Θ(nk) worst-case time.

Total running time = (n/k) · Θ(k^2)
				   = Θ(nk)

b. Show how to merge the sublists in Θ(nlg(n/k)) worst-case time.

Merge sort's recursion tree can be thought of this way: every level moves n elements in total, so

total time = n × (height of the tree)

In this case the bottom level of the tree consists of n/k sublists, and every level of merging halves the number of lists, so the height of the tree is lg(n/k).

Total running time = n × height of the tree
				   = n × lg(n/k)
				   = Θ(n lg(n/k))

c. Given that the modified algorithm runs in Θ(nk + nlg(n/k)) worst-case time, what is the largest value of k as a function of n for which the modified algorithm has the same running time as standard merge sort, in terms of Θ-notation?

What this is really asking is: for which k does

Θ(nk + n lg(n/k)) = Θ(n lg n)?

If k = Θ(n), growing as fast as n itself, then the nk term becomes Θ(n^2), which defeats the whole point of designing this hybrid. Looking more carefully, k = lg n is the largest choice that still gives the same running time:

k = Θ(lg n)
Θ(nk + n lg(n/k)) = Θ(n lg n + n lg(n/lg n))
				 = Θ(n lg n)

d. How should we choose k in practice?

For a concrete case you would need the exact expression for the hybrid's running time, something like

running time = C1·n·k + C2·n·lg(n/k)

and then, within the range k ≤ lg n, find the k that minimizes this function, which means taking derivatives... and so on; college math turns out to be useful after all... too lazy to actually do it... In practice you would simply benchmark a few values of k on the target machine, and I'll work out a concrete value if I ever really have to implement this pain of an algorithm...
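A toy Python sketch of the hybrid, with k as a tunable cutoff (my own parameter name; in a real library you'd benchmark to pick it):

def hybrid_merge_sort(a, k=16):
    """Merge sort that switches to insertion sort for runs of length <= k."""
    if len(a) <= k:
        # Insertion sort the short sublist: Theta(k^2) worst case per sublist.
        for j in range(1, len(a)):
            key, i = a[j], j - 1
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a
    mid = len(a) // 2
    left = hybrid_merge_sort(a[:mid], k)
    right = hybrid_merge_sort(a[mid:], k)
    # Standard merge of the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(hybrid_merge_sort([5, 2, 4, 7, 1, 3, 2, 6], k=2))   # [1, 2, 2, 3, 4, 5, 6, 7]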

2-2 Correctness of bubblesort

Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.

BUBBLESORT(A)
1	for i = 1 to A.length - 1
2		for j = A.length downto i + 1
3			if A[j] < A[j-1]
4				exchange A[j] with A[j-1]

a. Let A' denote the output of BUBBLESORT(A). To prove that BUBBLESORT is correct, we need to prove that it terminates and that

A'[1] ≤ A'[2] ≤ ... ≤ A'[n]		(2.3)

where n = A.length. In order to show that BUBBLESORT actually sorts, what else do we need to prove? The next two parts will prove inequality (2.3).

b. State precisely a loop invariant for the for loop in lines 2–4, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter.

c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the for loop in lines 1–4 that will allow you to prove inequality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter.

This is really a proof exercise... and to think geometry proofs used to be my favorite back in the day; this one still gives me a headache. Calm down... Amitabha... Wuliang Tianzun... Hallelujah...

OK, down to business.

For part a: besides inequality (2.3) we also need to prove that A' contains exactly the same elements as A, i.e. that the output is a permutation of the input.

For parts b and c, look at the inner for loop (lines 2-4) first. Its job is to sweep backward through the array, and its loop invariant is: at the start of each iteration, A[j] is the smallest element of A[j..A.length]. When the loop terminates (j = i), the smallest element of A[i..A.length] has therefore been pushed into position A[i].

The picture below shows what happens when i = 1:

[ 3, 2, 5, 1, 9]
           ^  ^
[ 3, 2, 5, 1, 9]
        ^ ←^  
[ 3, 2, 1, 5, 9]
     ^ ←^     	
[ 3, 1, 2, 5, 9]
  ^ ←^
[ 1, 3, 2, 5, 9]

And what do we end up with? The smallest of all the elements, 1, has been bubbled into position A[1]. That gives the outer loop invariant for part c: at the start of each iteration of lines 1-4, A[1..i-1] contains the i-1 smallest elements of A in sorted order. The remaining iterations fill in A[2], A[3], ... the same way, and at termination every element sits in its proper place, which is exactly inequality (2.3). Sadhu, sadhu.
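Here's a small Python sketch that runs bubblesort while asserting the inner-loop invariant at every step (my own illustration, not from the book):

def bubblesort(a):
    """Bubble sort with the inner-loop invariant checked at every step."""
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1, i, -1):
            # Invariant: a[j] is the smallest element of a[j..n-1].
            assert a[j] == min(a[j:])
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
        # After the inner loop, a[i] is the smallest element of a[i..n-1].
        assert a[i] == min(a[i:])
    return a

print(bubblesort([3, 2, 5, 1, 9]))   # [1, 2, 3, 5, 9]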

d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort?

From the pseudocode, bubblesort's running time is Θ(n^2) (the two loops always run to completion, so every case is Θ(n^2)), the same order of growth as insertion sort's worst case; insertion sort at least drops to Θ(n) on an already-sorted input.

2-3 Correctness of Horner’s rule

The following code fragment implements Horner’s rule for evaluating a polynomial

P(x) = ∑ a[k]·x^k {k = 0 to n}
	 = a[0] + x(a[1] + x(a[2] + ... + x(a[n-1] + x·a[n])...))

given the coefficients a[0], a[1]...a[n] and a value for x:

y = 0
for i = n downto 0
	y = a[i] + x * y

a. In terms of Θ-notation, what is the running time of this code fragment for Horner’s rule?

Θ(n). Honestly, I never imagined there were this many tricks to evaluating a polynomial.

b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner’s rule?

It's just computing each term from scratch and adding them up: the highest term needs x raised to the n-th power, which together with the coefficient takes n+1 multiplications, and so on down the terms, so

running time = 1 + 2 + ... + (n+1)
			 = Θ(n^2)

a factor of n slower than Horner's rule.
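For comparison, a Python sketch of both evaluators (the function names are my own):

def horner(coef, x):
    """Evaluate sum(coef[i] * x**i) with Horner's rule: Theta(n) multiplications."""
    y = 0
    for a in reversed(coef):     # i = n downto 0
        y = a + x * y
    return y

def naive(coef, x):
    """Naive evaluation: each term x**i is rebuilt from scratch, Theta(n^2) multiplications."""
    y = 0
    for i, a in enumerate(coef):
        term = a
        for _ in range(i):       # i multiplications for the i-th term
            term *= x
        y += term
    return y

coef = [3, 0, 2, 5]              # 3 + 2x^2 + 5x^3
print(horner(coef, 2), naive(coef, 2))   # 51 51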

c. Consider the following loop invariant: At the start of each iteration of the for loop of lines 2–3,

y = ∑a[k+i+1]x^k {k = 0 to n-(i+1)}

Interpret a summation with no terms as equaling 0. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination,

y = ∑ a[k]·x^k {k = 0 to n}

d. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients a[0]...a[n].

Too lazy to prove it... so tired I can't even love... something this obvious shouldn't be allowed to eat into my beautiful life~~

2-4 Inversions

Let A[1..n] be an array of n distinct numbers. If i<j and A[i] > A[j], then the pair (i,j) is called an inversion of A.

The appearance of inversions whips up the second great climax in the history of Chapter 2 of Introduction to Algorithms.

The wheel of history rolls slowly forward, crushing countless cockroaches along the way...

Seriously though, whose brain came up with something as twisted as an inversion, and then connected it to sorting? Staring at this strange creature, I can't help sinking into deep thought...

a. List the five inversions of the array <2,3,8,6,1>

<8,6> -- (3,4)
<6,1> -- (4,5)
<2,1> -- (1,5)
<3,1> -- (2,5)
<8,1> -- (3,5)

b. What array with elements from the set {1,2,...n} has the most inversions? How many does it have?

Obviously, intuition says that reversing all the elements produces the most inversions:

<n, n-1, ..., 2, 1>

The inversions starting with n number n-1: <n, n-1> ... <n, 1>.

The inversions starting with n-1 number n-2: <n-1, n-2> ... <n-1, 1>.

In total that is (n-1) + (n-2) + ... + 1 inversions, i.e. n(n-1)/2.

c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.

Actually, the process of counting inversions is the same as the process of insertion sort (cue the bragging: it took me a whole afternoon to see this). Both take each element in turn and push it into the elements before it; the number of slots it moves equals the number of earlier elements that are larger than it. Using the example above:

<2,3,8,6,1>
 |2 stays put; it's the first element
<2,3,8,6,1>
   |3 would like to move up, but 2 is smaller than 3, so it can't
<2,3,8,6,1>
	 |same story for 8
<2,3,8,6,1>
	   |6 moves forward 1 slot, because 6 < 8; inversions found = 1
<2,3,6,8,1>
	     |1 moves forward 4 slots, being the smallest; inversions found = 1 + 4 = 5
<1,2,3,6,8>

Notice that the order of the elements already in the prefix doesn't change how many of them are larger than the current one, so the number of inversions is exactly the number of slots insertion sort shifts elements by; in other words, insertion sort runs in Θ(n + d) time, where d is the number of inversions.

d. Give an algorithm that determines the number of inversions in any permutation on n elements in Θ(nlgn) worst-case time. (Hint: Modify merge sort.)

The answer to part c is exactly what we need. One generalization, stated carefully: for any sort that moves elements only by swapping adjacent pairs (not every sorting algorithm),

number of adjacent swaps = number of inversions

Merge sort doesn't swap adjacent elements, but during each merge, whenever an element is taken from the right half, every element still waiting in the left half forms an inversion with it, so we can count them on the fly. Combining this with the merge sort procedure gives the following pseudocode:

Merge-Inverse(A[], p, q, r)
	m = q - p + 1
	n = r - q
	L[1..m] = A[p..q]
	R[1..n] = A[q+1..r]
	// copy the two halves into working arrays L and R

	i = j = 1
	x = 0
	for k = p to r
		if i == m + 1   // L is exhausted; copy the rest of R
			A[k..r] = R[j..n]
			break
		if j == n + 1   // R is exhausted; copy the rest of L
			A[k..r] = L[i..m]
			break

		if L[i] <= R[j]
			A[k] = L[i]
			i++
			// the left element is smaller: no new inversions
		else
			A[k] = R[j]
			j++
			// the right element is smaller: every element still left in L forms an inversion with it
			x += (m - i + 1)
	return x

Merge-Sort-Inverse(A[], p, r)
	x = 0
	if p < r
		q = ⌊(p + r)/2⌋
		x += Merge-Sort-Inverse(A, p, q)
		x += Merge-Sort-Inverse(A, q+1, r)
		x += Merge-Inverse(A, p, q, r)
		// inversions = those inside the left half + those inside the right half
		//            + those crossing the two halves, counted during the merge
	return x

OK! Done!

This one small modification to merge sort immediately drops the running time for counting inversions from Θ(n^2) to Θ(n lg n). Very nice, very powerful.
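A runnable Python sketch of the same idea, for quick checking:

def count_inversions(a):
    """Count inversions of a with a merge-sort pass: Theta(n lg n)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, x_left = count_inversions(a[:mid])
    right, x_right = count_inversions(a[mid:])
    merged, i, j, x_cross = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            x_cross += len(left) - i       # every remaining left element forms an inversion
    merged += left[i:] + right[j:]
    return merged, x_left + x_right + x_cross

print(count_inversions([2, 3, 8, 6, 1])[1])   # 5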

So why does this work?

Reason one, I think, is that merge sort shaves off the redundant Θ(n^2) worth of comparisons that insertion sort performs, keeping only the minimal Θ(n lg n) of comparing and moving.

Reason two: the smartest step in this problem is the transformation itself, turning a counting problem into a sorting problem. A genuine stroke of genius; my brain is nowhere near that level yet. Respect!

Reposted from: https://my.oschina.net/HardySimpson/blog/287338
