Dynamic Programming


What is dynamic programming

Dynamic programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using dynamic programming. The idea is to simply store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization can reduce the time complexity from exponential to polynomial. For example, a simple recursive solution for the Fibonacci numbers has exponential time complexity, but if we optimize it by storing the solutions of subproblems, the time complexity reduces to linear.

// recursion: exponential
int fib(int n){
    if (n <= 1)
      return n;
    return fib(n-1) + fib(n-2);
}

// dynamic programming: linear

int fib(int n){
    if (n <= 1)
        return n;
    int f[n + 1];   // need n+1 slots to hold f[0]..f[n]
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; i++){
        f[i] = f[i-1] + f[i-2];
    }
    return f[n];
}

Solve DP problems

Steps to solve a DP:

1) Identify if it is a DP problem.
2) Decide a state expression with the fewest parameters.
3) Formulate the relationship between states.
4) Add tabulation (or memoization).

1. Identify a dynamic programming problem

Typically, a problem is a DP candidate when it asks to maximize or minimize some quantity, or to count the number of ways to do something, and a plain recursive solution would solve the same subproblems repeatedly.

2. Deciding the state

DP problems are all about states and their transitions.

The state transition depends on the choice of state definition you make.

A state can be defined as the set of parameters that can uniquely identify a certain position or standing in the given problem. This set of parameters should be as small as possible to reduce state space.

For example, in the knapsack problem we define our state by two parameters, index and weight, i.e. DP[index][weight]. Here DP[index][weight] tells us the maximum profit we can make by taking items from the range 0 to index with a remaining sack capacity of weight. Therefore, the parameters index and weight together uniquely identify a subproblem for the knapsack problem.
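
As a rough sketch, this state could be stored in a two-dimensional table; N (number of items) and W (sack capacity) below are assumed names for illustration, not from the original notes.

#define N 4    // number of items (assumed)
#define W 10   // sack capacity (assumed)

// dp[index][weight] = maximum profit achievable by picking from
// items 0..index when `weight` units of capacity are still available
int dp[N + 1][W + 1];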

DP is all about using already-calculated results to build up the final result. So, our next step is to find a relation between previous states and the current state.

3. Formulating a relation among the states
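
For instance, for the knapsack state above, the relation can be read off the two choices at each state: either skip item i, keeping the previous profit, or take it (if it fits), adding its value and consuming its weight. The sketch below is only an illustration; the sample weights, values, and bounds are assumptions, not from the original notes.

// knapsack sketch: dp[i][w] = best profit using the first i items with capacity w
// relation: dp[i][w] = max(dp[i-1][w], val[i-1] + dp[i-1][w - wt[i-1]])  if wt[i-1] <= w

#include <stdio.h>

#define N 4                       // number of items (assumed)
#define W 10                      // sack capacity (assumed)

int wt[N]  = {5, 4, 6, 3};        // sample weights (assumed)
int val[N] = {10, 40, 30, 50};    // sample values (assumed)

int max(int a, int b){ return a > b ? a : b; }

int knapsack(void){
    int dp[N + 1][W + 1];
    for (int w = 0; w <= W; w++)
        dp[0][w] = 0;                         // base case: no items, no profit
    for (int i = 1; i <= N; i++){
        for (int w = 0; w <= W; w++){
            dp[i][w] = dp[i-1][w];            // skip item i-1
            if (wt[i-1] <= w)                 // take item i-1 if it fits
                dp[i][w] = max(dp[i][w], val[i-1] + dp[i-1][w - wt[i-1]]);
        }
    }
    return dp[N][W];
}

int main(void){
    printf("%d\n", knapsack());   // prints 90 for the sample data
    return 0;
}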

4. Adding memoization or tabulation for the state

Tabulation vs Memoization

Tabulation(Bottom up)

Memoization(Top down)

Tabulation – Bottom Up Dynamic Programming

In tabulation, we find dp[n] by computing the table in order, from dp[0] up to dp[n].

// tabulation version to find factorial x
// state transition: dp[x+1] = dp[x] * (x+1)

int dp[MAXN];

// base case
dp[0] = 1;
for (int i = 1; i <= n; i++){
    dp[i] = dp[i-1] * i;
}

From the above example, the dp table is populated sequentially and we directly access the already-calculated states from the table itself; hence, we call it the tabulation method.
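
A minimal runnable version of the sketch above, assuming MAXN and the driver below (they are not part of the original notes):

#include <stdio.h>

#define MAXN 20

int dp[MAXN];

// fill the table bottom-up: dp[i] = i!
int factorial(int n){
    dp[0] = 1;                     // base case
    for (int i = 1; i <= n; i++){
        dp[i] = dp[i-1] * i;       // state transition: dp[i] = dp[i-1] * i
    }
    return dp[n];
}

int main(void){
    printf("%d\n", factorial(10)); // prints 3628800 (overflows 32-bit int for n > 12)
    return 0;
}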

Memoization – Top down dynamic programming

To find dp[n], we start from dp[n] and recurse downwards until we reach the base case dp[0].

// Memoized version to find factorial x.
// to speed up we store the values of calculated states

// dp[] must be filled with -1 before the first call (e.g., with memset)
int dp[MAXN];

// return factorial of x
int solve(int x){
    if (x == 0){
        return 1;
    }
    if (dp[x] != -1){
        return dp[x];
    }
    return (dp[x] = x * solve(x - 1));
}

As we can see, we store the value of each state the first time it is computed, so that the next time we get a call for the same state we simply return it from memory. This is why we call it memoization: we memorize the computed state values.
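
A minimal usage sketch of the memoized version, assuming MAXN and the driver below (they are assumptions, not part of the original notes); the memo array is marked as "not computed" with memset before the first call.

#include <stdio.h>
#include <string.h>

#define MAXN 20

int dp[MAXN];

int solve(int x){
    if (x == 0) return 1;
    if (dp[x] != -1) return dp[x];          // already computed: return from memory
    return (dp[x] = x * solve(x - 1));      // compute once, then memoize
}

int main(void){
    memset(dp, -1, sizeof(dp));             // mark every state as not yet computed
    printf("%d\n", solve(10));              // prints 3628800
    return 0;
}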


