Dynamic Programming is one of the most popular techniques in competitive programming and a powerful algorithm design technique. This handout aims to give a brief introduction to Dynamic Programming.

*Optimization Algorithms*

An algorithm is a set of steps to perform a certain procedure and reach a certain end goal. In particular, the problems we are going to look at in this handout are *optimization* problems - problems that aim at minimising or maximising something, such as finding a shortest path or finding the best way to do something.

*Greedy Algorithms*

Greedy algorithms solve optimisation problems by making the decision that seems optimal in the current scenario. This maximises short-term goals but may not be ideal in the long run.

**Example 1 (Activity Selection Problem):** On a certain day, there are $n$ events that you would like to attend. Each of these events has a start time $s_i$ and a finish time $f_i$. Some of these events may overlap, and you are only allowed to attend events with no overlap. Provide an efficient algorithm that lets you maximise the number of events you attend (completely).

**Walkthrough:** Note that we haven't introduced the notion of an *efficient algorithm*. The idea is that if we allowed any algorithm, we could just brute-force through each of the $2^n$ subsets of events and see which valid subset contains the maximum number of events. However, this is exponential in the number of events and is therefore very inefficient. Instead, we want an algorithm that requires at most a number of steps that is polynomial in $n$.

- Work out why a greedy algorithm that picks the shortest event followed by the next shortest event that can still be picked (and so on) does not work.
- Show that a greedy algorithm that tries to pick the event with minimum $s_i$ (that can be picked) at any point does not work.
- Prove that the previous idea with $s_i$ replaced with $f_i$ works.
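As a sketch, the finish-time greedy from the last bullet might be implemented like this in Python (representing each event as a `(start, finish)` pair is an assumption for illustration):

```python
def max_events(events):
    """Greedy activity selection: events is a list of (start, finish) pairs."""
    count = 0
    last_finish = float("-inf")
    # Always take the compatible event that finishes earliest.
    for start, finish in sorted(events, key=lambda e: e[1]):
        if start >= last_finish:  # does not overlap the last chosen event
            count += 1
            last_finish = finish
    return count
```

For example, `max_events([(1, 3), (2, 5), (4, 7), (1, 8), (8, 9)])` returns `3`: the events $(1,3)$, $(4,7)$ and $(8,9)$ fit together.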

**Example 2 (Weighted Activity Selection Problem):** This is the same as the previous problem, except now each event has a value $v_i$ and your aim instead is to maximise the sum of the values of the events you attended.

**Walkthrough (or not?):** Show that the greedy algorithm from the previous example fails here.

Note that it is in fact not possible to solve this problem with a greedy algorithm. To solve it, we are going to have to expand our toolkit and learn the technique of *Dynamic Programming*.

*Dynamic Programming*

Dynamic Programming is a very powerful algorithmic technique for solving optimization problems, and it is seen a lot in computer science. It is essentially an improvement over simple recursion: it solves a problem by dividing it into smaller sub-problems and then combining the results of those sub-problems to get the final result.

The technique was invented by Richard Bellman. Here's an interesting story on the origin of dynamic programming.

Bellman explained that he invented the name dynamic programming to hide the fact that he was doing mathematical research. He was working at a place called RAND at a time when the Secretary of Defense had a pathological fear and hatred of the term research.

So he settled on the name dynamic programming because it would be difficult to give a bad meaning to it. It was something nobody would object to.

Long story short, the name sounded cool.

Dynamic programming can also be thought of as a *careful brute force*, which seems quite odd, but we'll soon see why this makes sense. As an algorithm, dynamic programming usually runs through all possible outcomes of a certain event, but does so in a careful way that avoids exponential time.

Let's do an example which is not technically an optimization problem but illustrates the ideas of dynamic programming well.

**Example:** You are climbing a staircase. It takes $n$ steps to reach the top. Each time, you can climb either 1 or 2 steps. In how many distinct ways can you climb to the top?

**Walkthrough:** This problem is quite easy: all we need to find is a pattern in the number of ways as $n$ increases.

- If you haven't already done this, figure out the number of ways for small $n$ until you can see what's happening.
- Show that the pattern you found is in fact true.

If you solved the problem correctly, you would have noticed that there is a recurrence relation (the Fibonacci recurrence) from which we can compute the required number of ways. Here is how the code for this might look:
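For instance, a direct recursive version in Python (taking the base cases as $f(1) = 1$ and $f(2) = 2$):

```python
def f(n):
    # Number of distinct ways to climb n stairs, 1 or 2 steps at a time.
    if n <= 2:
        return n  # 1 way for n = 1, 2 ways for n = 2
    # The first move is either a single step or a double step.
    return f(n - 1) + f(n - 2)
```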

While this definitely works, there is a major problem with the code above: we end up performing the same computations a large number of times, since we do not remember the values we have already calculated.

Notice that, when computing $f(4)$, the value $f(2)$ is calculated once in the left branch of the recursion tree (under $f(3)$) and once in the right branch - and repeated work like this slows down our program a lot.

To avoid this, we use the technique of *memoization*. The idea is that we remember the answers to sub-problems we have already solved rather than re-calculating them. This brings in the *careful* nature of the brute force.

A solution using memoization for the above problem may look something like this:
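One possible memoized version in Python, using a dictionary to cache already-solved subproblems:

```python
memo = {}

def f(n):
    # Same recurrence as before, but each f(k) is now computed only once.
    if n <= 2:
        return n
    if n not in memo:
        memo[n] = f(n - 1) + f(n - 2)
    return memo[n]
```

With memoization, even `f(50)` returns instantly, whereas the plain recursion would need billions of calls.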

To master the skill of dynamic programming, there are 5 key steps that we should remember:

1. **Define the subproblems:** In the above problem, the subproblems are climbing a smaller number of stairs.
2. **Guess your first move:** This is important! When we do not know how to start, we should try every possibility. In the above problem, we did not know whether the first move was a single step or two, so we consider both cases and build a recurrence on that guess.
3. **Find the recurrence relation:** a relation between different subproblems. This is essentially what allows us to solve the problem. In the above example we had $$f(n) = f(n-1) + f(n-2)$$ and it is this recurrence that allows us to calculate the required number of ways.
4. **Recurse and memoize:** We write the code and remember to remember!
5. **Solve the original problem:** At the end, we evaluate the function at the particular values we were actually after.

Let us now discuss two more cool problems on dynamic programming and see how each of our key steps applies to solving them.

**Example :**Let's say you are given a rod of length $n$ metres. The rod is quite valuable and you would like to sell it in the market. Different lengths of the rod can be sold for different prices and you would like to maximise your earning. To do this, you cut your rod into multiple (possibly just one) parts and sell each of them in the market. Find an efficient algorithm that gives you the maximum earning you can achieve by selling the rod.

To be more precise, you are given an array of $n$ integers, where the $i$-th integer is the price of an $i$ metre rod in the market ($0 < i \leq n$). Determine the maximum value obtainable by cutting up a rod of length $n$ metres and selling the pieces.

**Walkthrough :**

- What seem to be the natural subproblems for the given problem?
- We do not know where to make the first cut - it could be anywhere, right? Decide what our first guess should be.
- If your guess and choice of subproblems were correct, you should be quite close. You just want to see which of the subproblems you end up at after your initial guess is optimal. In a sense, you're looking for the maximum over a certain set of values.

If you like to code, you can now try to code this up!

**Solution:** Let the value of a rod of length $k$ metres be $a_k$, and let $f(k)$ denote the best possible price obtainable by selling a rod of length $k$ metres. Our recurrence is $$f(n) = \max_{i=1}^{n} \left(a_i + f(n-i)\right)$$
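A bottom-up sketch of this recurrence in Python (assuming `prices[i - 1]` holds $a_i$):

```python
def rod_cut(prices):
    # prices[i - 1] is the price a_i of an i metre rod; len(prices) == n.
    n = len(prices)
    f = [0] * (n + 1)  # f[k] = best earning from a rod of length k
    for k in range(1, n + 1):
        # Guess the length i of the first piece we cut off and sell.
        f[k] = max(prices[i - 1] + f[k - i] for i in range(1, k + 1))
    return f[n]
```

For the classic price list `[1, 5, 8, 9, 10, 17, 17, 20]`, `rod_cut` returns `22` (cut the rod into lengths 2 and 6).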

**Example:** Given a grid of size $m\times n$, assume you start at $(1, 1)$ and your goal is to reach $(m, n)$. At any instant, if you are on $(x, y)$, you can go either to $(x, y + 1)$ or to $(x + 1, y)$. Now suppose some obstacles are added to the grid. How many unique paths are there?

Who said dynamic programming is limited to just one dimension? Here we use our DP approach with 2 different parameters.

**Walkthrough :**

- We ask the same question again: what seem to be the natural subproblems for the given problem?
- We do not know what the last move was - we could have come from the left or from below, right? Make a guess on what the *last* move was.
- The problem is now somewhat similar to the original $n$ stairs problem we tried. The only difficulty is that we have a couple of obstacles in our way. Figure out a neat way to add up the cases for each possibility of our guess, keeping in mind that the cell we're currently considering may be blocked.

Again, if you like to code, you can now try to code this up!

**Solution:** Let $f(a,b)$ denote the number of ways to reach cell $(a,b)$ from $(1,1)$. With $f(1,1) = 1$ (assuming the start is not blocked) and $f(a,b) = 0$ for cells outside the grid, our recurrence is

$$f(a,b) = \begin{cases}f(a-1,b) + f(a,b-1) & \text{if } (a,b) \text{ is not blocked} \\ 0 & \text{otherwise} \end{cases}$$
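Translated into a bottom-up table in Python (cells equal to `1` are assumed to mark obstacles, and indices are 0-based):

```python
def unique_paths(grid):
    # grid[x][y] == 1 marks a blocked cell; moves go right or down.
    m, n = len(grid), len(grid[0])
    f = [[0] * n for _ in range(m)]
    for x in range(m):
        for y in range(n):
            if grid[x][y] == 1:
                f[x][y] = 0  # no path can pass through a blocked cell
            elif x == 0 and y == 0:
                f[x][y] = 1  # the starting cell
            else:
                # Sum the ways arriving from the cell above and from the left.
                f[x][y] = (f[x - 1][y] if x > 0 else 0) + (f[x][y - 1] if y > 0 else 0)
    return f[m - 1][n - 1]
```

For a $3\times 3$ grid with a single obstacle in the centre, this returns 2: the path around the top-right and the path around the bottom-left.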

*Problems*

**Try solving Example 2 with the technique of dynamic programming.**

__Problem 1 :__ **The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given sequence such that all elements of the subsequence are sorted in increasing order. For example, the length of the LIS for $\{10, 22, 9, 33, 21, 50, 41, 60, 80\}$ is $6$, and the LIS is $\{10, 22, 33, 50, 60, 80\}$.**
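One possible quadratic-time sketch in Python, taking "the LIS ending at each index" as the subproblem (this choice of subproblem is the standard one, not something fixed by the problem statement):

```python
def lis_length(a):
    n = len(a)
    f = [1] * n  # f[i] = length of the longest increasing subsequence ending at a[i]
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:  # a[i] can extend a subsequence ending at a[j]
                f[i] = max(f[i], f[j] + 1)
    return max(f, default=0)
```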

__Problem 2 :__ **The Longest Common Subsequence (LCS) problem is to find the longest subsequence common to two given sequences, i.e., the longest sequence which can be obtained from the first sequence by deleting some items and from the second sequence by deleting other items, keeping the order of the remaining items intact.**
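A standard two-dimensional DP sketch in Python, where the subproblem is the LCS of a prefix of each sequence:

```python
def lcs_length(a, b):
    m, n = len(a), len(b)
    # f[i][j] = LCS length of the prefixes a[:i] and b[:j]
    f = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                f[i][j] = f[i - 1][j - 1] + 1  # a match extends the common subsequence
            else:
                f[i][j] = max(f[i - 1][j], f[i][j - 1])  # drop the last item of one side
    return f[m][n]
```

For example, `lcs_length("ABCBDAB", "BDCABA")` returns `4` (one LCS is `"BCBA"`).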

__Problem 3 :__

Post by **Rushil Mathur**.

**About the guest blogger:** Rushil is an 11th grader from Mumbai. He's been doing olympiad math for over 3 years now and he absolutely loves combinatorics. Apart from maths, he likes competitive programming and is always up for a chess match.

Rushil is a 2022 INMO Awardee and a part of the Sophie Fellowship program. He also won a bronze at the INOI 2022.

Post set up and modified by **Pranav Choudhary**.