The name given to a large family of optimization algorithms, invented by R. E. Bellman. When searching a very large solution space, it is sometimes possible to transform exponential running time into polynomial running time by caching previously computed results that are optimal for smaller instances of the problem.

For instance, finding a shortest path between two vertices of a graph whose edges are labelled with their lengths can use this technique. Define the k'th distance matrix Mk so that row i, column j holds the length of the shortest path from vertex i to vertex j taking not more than k steps (if it is not possible to get from i to j in at most k steps, the value is infinity).

Then M1 is simply the graph's adjacency matrix, with edge lengths as entries. Given Mk we can readily calculate Mk+1: for each vertex l adjacent to j, Mk gives the length L of the shortest path from i to l taking at most k steps, so we can get from i to j in at most k+1 steps with distance L+d(l,j), where d(l,j) is the length of the edge from l to j. Minimise this over all l's (and over the existing at-most-k-step path from i to j) to get entry <i,j> of Mk+1; do this for all i and j to get Mk+1. The distance from some vertex A to another vertex B is then simply entry <A,B> of Mn-1, where n is the number of vertices in the graph, since a shortest path never needs more than n-1 edges.
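A minimal sketch of one step of this calculation in C++ (the matrix representation, the function name nextMatrix, and the INF sentinel are my own choices for illustration, not part of the description above):

#include <vector>
#include <algorithm>

const long long INF = 1e15;   // stands for "no path"; small enough that INF plus an edge length cannot overflow

// One step of the recurrence: given Mk and the matrix d of direct edge
// lengths (d[l][j] = INF if there is no edge from l to j), compute Mk+1.
std::vector<std::vector<long long>> nextMatrix(
    const std::vector<std::vector<long long>>& Mk,
    const std::vector<std::vector<long long>>& d)
{
    int n = Mk.size();
    std::vector<std::vector<long long>> M = Mk;  // a path of at most k steps also takes at most k+1 steps
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int l = 0; l < n; l++)          // l = last vertex visited before j
                if (Mk[i][l] < INF && d[l][j] < INF)
                    M[i][j] = std::min(M[i][j], Mk[i][l] + d[l][j]);
    return M;
}

Applying this step n-2 times to M1 gives Mn-1 and hence all the shortest distances.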

Computer science makes up new names for everything. A huge number of special cases of the fundamental dynamic programming algorithms have been "rediscovered" over the years, and almost all are now named after their rediscoverers. For instance, edit distance, the Needleman-Wunsch algorithm, the Smith-Waterman algorithm, and the Viterbi algorithm were all known to Bellman. He simply invented too much to get all the credit he deserved.

Dynamic programming is a variation on recursive programming for cases where the recursion is of such a nature that the same calls would occur more than once. For instance,
fibonacci(n) = 1                                 if n = 0 or n = 1
fibonacci(n) = fibonacci(n-1) + fibonacci(n-2)   otherwise
will cause two calls to fibonacci(n-2), three to fibonacci(n-3), and an astronomical number to fibonacci(1) if n is 1000. This program will therefore run very slowly without dynamic programming.
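Written out directly as code, the recurrence looks like this, and exhibits exactly that blowup:

// Direct transcription of the recurrence above: every call spawns two
// more calls, so the total number of calls grows exponentially in n.
long long fibonacci(int n) {
    if (n == 0 || n == 1)
        return 1;
    return fibonacci(n - 1) + fibonacci(n - 2);
}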

There are actually three types of dynamic programming (DP):


Memoization

The principle of memoization is to code a standard recursive routine and make it store and recover results as needed. Example (C-like pseudocode):

int myResults[max_x][max_y];   /* every entry initialised to 'empty' before the first call */

int myFunc (int x, int y){
  if ( (x,y) == special_case)        /* base case: no recursion needed */
    return result_to_special_case;
  if (myResults[x][y] == empty){     /* not computed yet, so compute it once... */
    ..do recursive stuff..
    myResults[x][y]=result;          /* ...and store it for all later calls */
  }
  return myResults[x][y];
}
As long as '..do recursive stuff..' never causes myFunc to be called with the exact parameters it was given (which would recurse forever, since the result is only stored afterwards), each (x,y) pair is computed only once, so the running time is linear in each of x and y, while the recursive function without memoization could often take exponential time.
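As a concrete, runnable instance of this pattern, here is the fibonacci function from above with memoization (using 0 as the 'empty' sentinel is my choice; it works because every real fibonacci value is positive):

const int MAX_N = 91;          // fibonacci(92) already overflows a 64-bit integer
long long memo[MAX_N + 1];     // globals are zero-initialised, so 0 doubles as 'empty'

long long fibonacci(int n) {   // assumes 0 <= n <= MAX_N
    if (n == 0 || n == 1)      // the special cases
        return 1;
    if (memo[n] == 0) {        // not computed yet
        memo[n] = fibonacci(n - 1) + fibonacci(n - 2);
    }
    return memo[n];
}

Now fibonacci(n-2) is computed once rather than twice, fibonacci(n-3) once rather than three times, and so on: the running time is linear in n.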

An advantage of this method is that you can usually start anywhere you want, as long as you are sure that (x,y) will never exceed the array bounds. Also, if you have a decent map data structure, this can be used on basically any input type. In C++ you can for instance create a map<input_type,output_type> myResults; and access it like an array without worrying about boundaries. As a side benefit, if your function may be called with parameters as large as a billion while most numbers will never be used as parameters at all, this type of DP will save a lot of time and memory compared to pure DP. On the downside, using a map structure adds a logarithmic factor to the running time.
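For instance, a minimal sketch of the map-based variant (the two-parameter recurrence here, which counts monotone lattice paths, is just an illustration):

#include <map>
#include <utility>

// The map only grows for the (x,y) pairs that are actually visited,
// even if the parameters themselves are huge.
std::map<std::pair<int,int>, long long> myResults;

long long myFunc(int x, int y) {
    if (x == 0 || y == 0)                 // special case: a single path along the edge
        return 1;
    auto it = myResults.find({x, y});
    if (it != myResults.end())            // O(log n) lookup instead of array indexing
        return it->second;
    long long result = myFunc(x - 1, y) + myFunc(x, y - 1);   // illustrative recursion
    myResults[{x, y}] = result;
    return result;
}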


Pure DP:

In the normal implementation of DP you begin by filling an array with the initial conditions and then loop through all the elements in such an order that results are always available when you need them. Example:

int myResults[max_x][max_y];

/* initial conditions: matching against an empty prefix costs its length */
for (int y=0;y<max_y;y++) myResults[0][y]=y;
for (int x=0;x<max_x;x++) myResults[x][0]=x;

for (int x=1;x<max_x;x++){
  for (int y=1;y<max_y;y++){
    if (someFunc(x,y))   /* e.g. the x'th character of one string equals the y'th of the other */
      myResults[x][y]=myResults[x-1][y-1];
    else
      myResults[x][y]=min(myResults[x][y-1],myResults[x-1][y])+1;
  }
}
(This is actually a simple string difference routine: it computes the number of single-character insertions and deletions needed to turn one string into the other.) When this code is done running, myResults will hold the required results.
This method is usually faster than memoization and doesn't risk blowing the stack. On the downside, it is perhaps not as intuitive as recursive memoization, looks less like our original recursive function, and requires that we know which data must be calculated first and in what order we may expand. Furthermore, using this routine on input types other than small integers will usually not work. Additionally, it may calculate results for subproblems that are not necessary to solve the original problem.

Iterative DP:

Iterative DP is much like pure DP except that you don't store data that you no longer need. An iterative solution to the fibonacci problem would be:

int fibonacci(int n)
{
  int fib_prev=1;        /* fibonacci(1) */
  int fib_prev_prev=1;   /* fibonacci(0) */
  for (int j=1;j<n;j++){
    int fib_new = fib_prev + fib_prev_prev;
    fib_prev_prev = fib_prev;
    fib_prev = fib_new;
  }
  return fib_prev;
}
Note: For an even faster iterative algorithm for this problem, read Compute Fibonacci numbers FAST!
Here we keep only the last two fibonacci numbers, as they are all we need to calculate further. The advantage is clearly in memory usage. However, in most cases it is not as easy to use as the other two methods.

In some cases, you can save most of the memory in a pure DP solution by combining it with iterative DP. In our pure DP example we could for instance have used only two columns: the one we are working on and the previous one. After we have evaluated a column, we make it the previous one and go on to calculate the next.
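A sketch of that idea applied to the string difference routine above, in C++ (here with rolling rows rather than columns, and with the two strings as explicit parameters, which the pure DP example left implicit; the name stringDifference is my own):

#include <string>
#include <vector>
#include <algorithm>

// Same recurrence as the pure DP example, but only two rows are kept:
// the one being filled in (curr) and the previous one (prev).
int stringDifference(const std::string& a, const std::string& b) {
    std::vector<int> prev(b.size() + 1), curr(b.size() + 1);
    for (std::size_t y = 0; y <= b.size(); y++)
        prev[y] = (int)y;                      // initial conditions, as before
    for (std::size_t x = 1; x <= a.size(); x++) {
        curr[0] = (int)x;                      // first-column entry for this row
        for (std::size_t y = 1; y <= b.size(); y++) {
            if (a[x-1] == b[y-1])
                curr[y] = prev[y-1];
            else
                curr[y] = std::min(curr[y-1], prev[y]) + 1;
        }
        std::swap(prev, curr);                 // the current row becomes the previous one
    }
    return prev[b.size()];                     // memory used: O(|b|) instead of O(|a|*|b|)
}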

Conclusion:

When you have a recursive relation that takes exponential time, always see if you can find a way to use dynamic programming on it to bring the running time down to polynomial. Make sure that you use the correct version of DP for the given problem. Make sure you have the initial conditions correct. Use an iterative solution if you need to save memory. Use memoization with a map if it will save running time (or much-needed coding time).
