Dynamic Programming

A summary of dynamic programming:

1. Dynamic programming is suitable for solving optimization problems with two important ingredients: optimal substructure and overlapping subproblems.

2. Optimal substructure means the solution to the original problem can be built from the solutions to its subproblems. E.g., in the assembly-line problem, the best path to station S1,j depends on the best path to station S1,j-1 or S0,j-1; in the matrix-chain multiplication problem, the best parenthesization for A1..An depends on the solutions to two subproblems, A1..Ak and Ak+1..An; and in LCS, LCS[i,j] depends on LCS[i-1,j-1] or on LCS[i-1,j] and LCS[i,j-1]. We cannot solve the problem directly: we solve the subproblems first, then combine their solutions to obtain the final solution.
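
As a concrete illustration of the LCS recurrence above, here is a minimal bottom-up sketch in Python (the function and variable names are my own, not from any particular textbook):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y,
    using LCS[i,j] = LCS[i-1,j-1] + 1 when the last characters match,
    otherwise max(LCS[i-1,j], LCS[i,j-1])."""
    m, n = len(x), len(y)
    # table[i][j] holds the LCS length of x[:i] and y[:j]
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]
```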

3. The other characteristic of dynamic programming is overlapping subproblems: the same subproblem is referred to by different problems several times. For example, in the assembly-line problem, to find the best path to S1,j we need the best paths to S1,j-1 and S0,j-1; to find the best path to S0,j we also need the best paths to S0,j-1 and S1,j-1. So the same subproblems S1,j-1 and S0,j-1 are referred to by two different problems, S1,j and S0,j: these are overlapping subproblems. If we solved this kind of problem with divide and conquer, the subproblems shared by different problems would be solved many times, which can make the time complexity of the solution exponential. So in dynamic programming we solve the problem in a bottom-up fashion and make sure each subproblem is computed only once. We also often use a method called memoization, which records the solution to each subproblem in a table, so that when we encounter the same subproblem again we do not recalculate it; we just look it up in the table.
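
The effect of memoization can be sketched with a standard example, Fibonacci numbers, which is not one of the problems above but has the same overlapping-subproblem structure: the naive recursion recomputes the same values in both branches, while the memo table computes each value exactly once.

```python
def fib(n, memo=None):
    """Memoized Fibonacci: without the memo table the naive recursion
    takes exponential time; with it, each subproblem is solved once."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # already solved: just look it up
    if n < 2:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```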

4. The procedure for solving a problem with dynamic programming
A. First of all, we need to find the optimal substructure of the problem. Usually when analyzing a problem we start with a brute-force method, and if we find many common subproblems we might think of using dynamic programming. For example, in the assembly-line problem, a brute-force method chooses k stations from the first line and n-k stations from the second line; there are 2^n choices, so the time complexity is O(2^n), which is huge and unacceptable. Why does this method cost so much? If we choose stations 1..n-1 on the first line and station n on the second line, we get one solution; if we choose stations 1..n-2 on the first line and stations n-1, n on the second line, we get another. These two solutions share a common subproblem, the path through stations 1..n-2, which is calculated twice. That is an overlapping subproblem, so we should consider whether dynamic programming can solve the problem.

B. If we find overlapping subproblems, we should check whether the problem has optimal substructure, i.e., whether the optimal solution to the problem can be built from optimal solutions to its subproblems. For example, in the assembly-line problem, the best path to station S1,j can be obtained from the best path to station S1,j-1 or S0,j-1, which is an optimal substructure. In matrix-chain multiplication, the optimal solution to A1..An can be built from the optimal solutions to the subproblems A1..Ak and Ak+1..An. Before identifying the optimal substructure we usually have to analyze the subproblem space, i.e., get a clear view of how many distinct subproblems there are; the time and space complexity of the solution usually have the same order of magnitude as the size of the subproblem space. E.g., the time and space complexity of the assembly-line problem is O(n), and its subproblem space has 2n subproblems, the same order of magnitude.

C. Once we find the optimal substructure, we can write the recurrence, which describes the relationship between a problem and its subproblems and how many choices we have when combining the optimal solutions of the subproblems into the optimal solution of the original problem. E.g., in the assembly-line problem, to get the best path to station S1,j we have two choices: come from S1,j-1 or come from S0,j-1; we pick the better of the two as the final path to S1,j. In matrix-chain multiplication, to find the optimal solution to problem M[i,j] we have j-i choices, and the best among them becomes the final solution to the problem.
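
The assembly-line recurrence above can be sketched as follows. The input layout is my own assumption about how the data is given: a[i][j] is the time at station j on line i, t[i][j] is the transfer time after station j from line i to the other line, and e[i]/x[i] are entry/exit times.

```python
def fastest_way(a, t, e, x):
    """Bottom-up assembly-line DP: f[i][j] is the fastest time to finish
    station j on line i, using the two-choice recurrence (stay on the
    same line, or transfer from the other line)."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        # two choices for each station: stay, or cross over
        f[0][j] = min(f[0][j - 1], f[1][j - 1] + t[1][j - 1]) + a[0][j]
        f[1][j] = min(f[1][j - 1], f[0][j - 1] + t[0][j - 1]) + a[1][j]
    return min(f[0][n - 1] + x[0], f[1][n - 1] + x[1])
```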

D. After the recurrence is done, we can write the code. With dynamic programming we compute subproblems in a bottom-up fashion: solve the subproblems at the bottom level first, then the subproblems at the level above, and so on; the problem at the top level is the original problem we need to solve. With memoization we instead solve the problem in a top-down fashion, usually recursively. Either way, this step gives us the optimal value of the problem.
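
A bottom-up sketch of the matrix-chain recurrence, filling the table by increasing chain length so every subproblem is ready before it is needed. The dimension-list convention (Ai has size p[i-1] x p[i]) is an assumption about the input format.

```python
def matrix_chain_order(p):
    """Minimal number of scalar multiplications to compute A1..An,
    where Ai is p[i-1] x p[i]. m[i][j] is the optimal cost for Ai..Aj,
    filled bottom-up: all shorter chains are solved before longer ones."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # j - i choices for the split point k
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```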

E. If we need the optimal solution itself, not only the optimal value of the problem, then we must store more information in step D. E.g., in the assembly-line problem, if we just want the minimal time to station S1,j, we only need to store the total time spent on the path; but if we want the path itself, we must store much more: the previous station of each station on the path, so that we can trace back to the first station of the line and recover the whole path.
Likewise, in matrix-chain multiplication, if we just want the minimal number of multiplications, we only store that number for each subproblem; but if we want to know where to put the parentheses in the matrix chain, we must also store the index at which each subchain is split. In a nutshell, to build an optimal solution of the problem, not just the optimal value, we must store extra information.
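
A sketch of this for matrix-chain multiplication: alongside the cost table m, a split table records the index k where each subchain Ai..Aj is optimally split, and a small recursive helper rebuilds the parenthesization from it (the names are my own):

```python
def matrix_chain_with_splits(p):
    """Matrix-chain DP that also records, in split[i][j], the index k
    where the optimal parenthesization of Ai..Aj splits, so the full
    solution, not just the optimal value, can be rebuilt."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], split[i][j] = cost, k   # remember the split
    return m, split

def parens(split, i, j):
    """Trace the split table back to print the optimal parenthesization."""
    if i == j:
        return f"A{i}"
    k = split[i][j]
    return f"({parens(split, i, k)}{parens(split, k + 1, j)})"
```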

F. Optimization. When using dynamic programming we can usually optimize the time and space complexity of the solution slightly: we can reduce the constant factor of the complexity, but we cannot reduce the asymptotic complexity. The details must be analyzed for each specific problem.

Some notes:
1. Independence of subproblems: two subproblems of the original problem are independent if the solution to one does not affect the solution to the other.
2. Overlapping subproblems: the same subproblem is referred to by more than one problem, i.e., one subproblem is used many times by different problems.
3. The key point of dynamic programming is that each subproblem is calculated only once, with a table storing the result of each subproblem, which usually reduces the time complexity from exponential to polynomial.

Update 2011.8.16
Assembly-line problem: O(n) time. n subproblems and only two choices for each, so O(n) in total.
Matrix-chain multiplication: O(n^3). n^2 subproblems and at most n-1 choices for each, so O(n^3) in total.
