Let's walk through an example to understand what dynamic programming is.
Alice is located in the North-West (Top-Left) block of her city, which has the form of a grid of NxN blocks. On each block there is a number of naughty kids. She wishes to reach the South-East (Bottom-Right) block while encountering as few naughty kids as possible. She can only move to blocks South (Down) or East (Right) of her current block. Help Alice minimize the total number of naughty kids she encounters along the way.
TL;DR: On a 2D matrix where every cell has a value, minimize (or maximize) the cost of moving from the top-left cell to the bottom-right cell, while only being able to move down and right.
The above statement is a minimization problem. If the statement were about Alice's friends instead of naughty kids, we would have a maximization problem, and the solution that follows would still work: just swap the min function with max.
A human can often spot the cheapest path at a glance. But how can you make a machine find it?
Since each cell can only be entered either from above or from the left, the optimal cost of arriving at a given cell is the minimum of the cost of arriving from the left and the cost of arriving from above, plus the given cell's own cost. Note that cells on the first row and column can only be entered from one direction, so the optimal cost of arriving at such a cell is its own cost plus the cost of the only possible way of entering it. Essentially, we are using the solutions of previous subproblems to build the global solution. That is the essence of Dynamic Programming.
The above explanation in fancy math notation, where opt(i,j) denotes the optimal cost of arriving at the cell in row i and column j and c(i,j) denotes that cell's cost:

opt(i,j) = c(i,j) + min(opt(i-1,j), opt(i,j-1))

with opt(0,0) = c(0,0), and the min collapsing to the single available neighbor on the first row or first column.
You can't tell the whole path in advance, but you can decide each step from the results already computed. A very important and strong assumption behind Dynamic Programming is optimal substructure: the optimal solution to the whole problem is built from optimal solutions to its subproblems.
public static int solve(int[][] c, int n) {
    int[][] opt = new int[n][n]; // opt[i][j] = minimal cost of reaching cell (i, j)
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (i > 0 && j > 0) {
                // interior cell: enter from above or from the left, whichever is cheaper
                opt[i][j] = Math.min(opt[i-1][j], opt[i][j-1]) + c[i][j];
            } else if (i > 0 && j == 0) {
                // first column: can only enter from above
                opt[i][j] = opt[i-1][j] + c[i][j];
            } else if (i == 0 && j > 0) {
                // first row: can only enter from the left
                opt[i][j] = opt[i][j-1] + c[i][j];
            } else {
                // starting cell
                opt[i][j] = c[i][j];
            }
        }
    }
    return opt[n-1][n-1];
}
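To sanity-check the recurrence, here is a minimal sketch running solve on a hypothetical 3x3 grid; the grid values and the class name GridDemo are my own illustration, not part of the original problem statement.

```java
public class GridDemo {
    // Same solve method as above, reproduced so this sketch is self-contained.
    public static int solve(int[][] c, int n) {
        int[][] opt = new int[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i > 0 && j > 0) {
                    opt[i][j] = Math.min(opt[i-1][j], opt[i][j-1]) + c[i][j];
                } else if (i > 0) {
                    opt[i][j] = opt[i-1][j] + c[i][j];
                } else if (j > 0) {
                    opt[i][j] = opt[i][j-1] + c[i][j];
                } else {
                    opt[i][j] = c[i][j];
                }
            }
        }
        return opt[n-1][n-1];
    }

    public static void main(String[] args) {
        // Illustrative grid of "naughty kid" counts
        int[][] kids = {
            {1, 3, 1},
            {1, 5, 1},
            {4, 2, 1}
        };
        // Cheapest path is 1 -> 3 -> 1 -> 1 -> 1, total 7
        System.out.println(solve(kids, 3)); // prints 7
    }
}
```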
OK, by now you should have some insight into DP. Keep in mind, though, that DP is more of an idea or way of thinking than a concrete technique like recursion or iteration.
___
Let's look at another example, a very classic one: the Longest Increasing Subsequence.
Objective: the Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given array such that all elements of the subsequence are sorted in increasing order.
OR
Given an array A[1..n], find a subsequence B[1..m] with B[i] < B[i+1] for i = 1, 2, ..., m-1, such that m is maximized.
Example:
How do we solve this problem? Let's make it concrete. Take the element 23 from the example: the increasing subsequences that can precede it are {1, 7}, {12}, or {0}. So we set 23's longest-subsequence length so far to Max{prev} + 1.
That is to say, given A[1..n], we build a corresponding array, say res[], where res[i] = max{res[j] + 1 : j < i and A[j] < A[i]}, and res[i] = 1 when no such j exists.
Every res[i] is built from previous results. That is dynamic programming.
import java.util.Arrays;

public class longestIncreasingSub {
    public static void main(String[] args) {
        longestIncreasingSub lis = new longestIncreasingSub();
        int[] input = {10, 9, 2, 5, 3, 4, 7, 101, 18};
        System.out.println(lis.lengthOfLIS(input));
    }

    public int lengthOfLIS(int[] nums) {
        int[] res = new int[nums.length]; // res[i] = length of the longest increasing subsequence ending at index i
        Arrays.fill(res, 1);              // every element on its own is a subsequence of length 1
        for (int i = 0; i < nums.length; i++) {
            for (int j = 0; j < i; j++) {
                // if nums[i] can extend a subsequence ending at j, keep the better of
                // the current res[i] and res[j] + 1 (the +1 counts nums[i] itself)
                if (nums[i] > nums[j])
                    res[i] = Math.max(res[i], res[j] + 1);
            }
        }
        int max = 0;
        for (int i = 0; i < nums.length; i++) {
            max = Math.max(max, res[i]);
        }
        return max;
    }
}
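To see the recurrence at work, here is a small hand-checkable trace of the res[] array; the input array and the class name LisTrace are my own illustration, not from the original.

```java
import java.util.Arrays;

public class LisTrace {
    // Recomputes the res[] array from the DP above for one small input.
    static int[] trace(int[] nums) {
        int[] res = new int[nums.length];
        Arrays.fill(res, 1);
        for (int i = 0; i < nums.length; i++)
            for (int j = 0; j < i; j++)
                if (nums[i] > nums[j])
                    res[i] = Math.max(res[i], res[j] + 1);
        return res;
    }

    public static void main(String[] args) {
        int[] nums = {3, 1, 4, 1, 5, 9, 2, 6}; // illustrative input
        System.out.println(Arrays.toString(trace(nums)));
        // res = [1, 1, 2, 1, 3, 4, 2, 4]; the LIS length is 4 (e.g. 1, 4, 5, 9)
    }
}
```

Reading res[] left to right shows each entry depending only on earlier entries, which is why a single forward pass suffices.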
So... do you understand, my friend?
Thanks a lot!