
Dynamic Programming

Dynamic Programming is one of the most popular techniques in competitive programming and a powerful algorithm design tool in general. This handout aims to give a brief introduction to it.

Optimization Algorithms
An algorithm is a sequence of steps that performs a certain procedure to reach a certain end goal. In particular, the problems we are going to look at in this handout are optimization problems: problems that aim at minimizing or maximizing some quantity, such as finding a shortest path or the best way to do something.

Greedy Algorithms
Greedy algorithms solve optimization problems by making the decision that seems optimal in the current scenario; this maximizes short-term gains but may not be ideal in the long run.

Example 1 (Activity Selection Problem): On a certain day, there are $n$ events that you would like to attend. Each of these events has a start time $s_i$ and a finish time $f_i$. Some of these events may overlap, and you are only allowed to attend events with no overlap. Provide an efficient algorithm that lets you maximize the number of events you attend (completely).

Walkthrough: Note that we haven't yet made precise what an efficient algorithm is. The idea is that if we allowed any algorithm, we could just brute-force through each of the $2^n$ subsets of events and see which overlap-free subset contains the maximum number of events. However, this is exponential in the number of events and therefore very inefficient. Instead, we want an algorithm that requires at most a number of steps polynomial in $n$.
  1. Work out why a greedy algorithm that picks the shortest event, followed by the next shortest event that can still be picked (and so on), does not work.
  2. Show that a greedy algorithm that, at any point, picks the remaining compatible event with minimum $s_i$ does not work.
  3. Prove that the previous idea, with $s_i$ replaced by $f_i$, does work (a code sketch of this greedy follows the list).
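For concreteness, here is a minimal C++ sketch of the earliest-finish-time greedy; the event data in it is made up purely for illustration.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Events as (start, finish) pairs; this sample data is illustrative only.
    std::vector<std::pair<int, int>> ev = {{1,4}, {3,5}, {0,6}, {5,7}, {5,9}, {8,9}};

    // Sort by finish time, then greedily take every event that starts
    // no earlier than the finish of the last event taken.
    std::sort(ev.begin(), ev.end(),
              [](const auto& a, const auto& b) { return a.second < b.second; });
    int count = 0, lastFinish = -1;
    for (const auto& [s, f] : ev)
        if (s >= lastFinish) { ++count; lastFinish = f; }

    std::cout << count << '\n';  // 3 for this data: (1,4), (5,7), (8,9)
}
```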
Example 2 (Weighted Activity Selection Problem): This is the same as the previous problem, except that each event now has a value $v_i$, and your aim instead is to maximize the sum of the values of the events you attend.

Walkthrough (or not?): Show that the greedy algorithm from the previous example fails here.

Note that it is in fact not possible to solve this problem with a greedy algorithm. To solve it, we are going to have to expand our toolkit and learn the technique of Dynamic Programming.

Dynamic Programming
Dynamic Programming is a very powerful algorithmic technique for solving optimization problems, and it appears all over computer science. It is essentially an improvement over simple recursion: it solves a problem by dividing it into smaller subproblems and then combining the results of those subproblems into the final result.
The technique was invented by Richard Bellman. Here's an interesting story about the origin of the name.

Bellman explained that he chose the name "dynamic programming" to hide the fact that he was doing mathematical research. He was working at RAND under a Secretary of Defense who had a pathological fear and hatred of the word "research."

So he settled on the name "dynamic programming" because it would be difficult to give it a bad meaning; in his words, it was something not even a Congressman could object to.

Long story short, the name sounded cool.

Dynamic programming can also be thought of as a "Careful Brute Force", which sounds quite odd, but we'll soon see exactly why it makes sense.
As an algorithm, dynamic programming usually runs through all possible outcomes of a certain event, but does so in a careful way that avoids exponential time.

Let's do an example which is not technically an optimization problem but illustrates the ideas of dynamic programming well.

Example: You are climbing a staircase. It takes $n$ steps to reach the top, and each time you can climb either 1 or 2 steps. In how many distinct ways can you climb to the top?

Walkthrough: This problem is quite easy; all we need to find is a pattern in the number of ways as $n$ increases.
  1. If you haven't already done this, figure out the number of ways for small $n$ until you can see what's happening.
  2. Show that the pattern you found is in fact true.
If you solved the problem correctly, you will have noticed that there is a recurrence relation (the Fibonacci recurrence) from which we can compute the required number of ways. Here is how the code for this might look:
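One possible version in C++ (the function name and the test value in main are just for illustration):

```cpp
#include <iostream>

// Number of ways to climb n stairs taking 1 or 2 steps at a time,
// written as a direct (naive) recursion.
long long f(int n) {
    if (n <= 2) return n;        // 1 way for n = 1, 2 ways for n = 2
    return f(n - 1) + f(n - 2);  // first move is either 1 step or 2 steps
}

int main() {
    std::cout << f(10) << '\n';  // prints 89
}
```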

While this definitely works, there is a major problem with the code above: we end up performing the same computations a large number of times, because we do not remember the values we have already calculated.
Notice, for example, that when computing $f(4)$, the value $f(2)$ is calculated once inside the recursive call to $f(3)$ and once again directly, and such repeated work slows our program down a lot.
To avoid this we use the technique of memoization: we remember the answers to subproblems we have already solved rather than re-calculating them. This is exactly what makes the brute force careful.

A solution using memoization for the above problem may look something like this:
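Here is one such version in C++, again with illustrative names and test value (assuming $n$ is small enough that the answer fits in a 64-bit integer):

```cpp
#include <iostream>
#include <vector>

// memo[k] holds the number of ways to climb k stairs, or -1 if not yet computed.
std::vector<long long> memo;

long long f(int n) {
    if (n <= 2) return n;
    if (memo[n] != -1) return memo[n];     // already solved: reuse the stored answer
    return memo[n] = f(n - 1) + f(n - 2);  // solve once, then remember
}

int main() {
    int n = 50;
    memo.assign(n + 1, -1);
    std::cout << f(n) << '\n';  // now linear in n instead of exponential
}
```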
To master the skill of dynamic programming, there are 5 key steps that we should remember:
  1. Define the subproblems: in the above problem, the subproblems are climbing a smaller number of stairs.
  2. Guess your first move: this is important! When we do not know how to start, we try every possibility. In the above problem, we did not know whether the first move was a single step or two, so we consider both cases and build a recurrence on that guess.
  3. Find the recurrence relation: a relation between the different subproblems. This is essentially what allows us to solve the problem. In the above example we had $$f(n) = f(n-1) + f(n-2),$$ and it is this recurrence that lets us calculate the required number of ways.
  4. Recurse and memoize: we write the code and remember to remember!
  5. Solve the original problem: at the end, we evaluate the function at the particular values we actually wanted.
Let us now discuss two more cool problems on dynamic programming and see how each of our key steps applies to solving them.

Example: Let's say you are given a rod of length $n$ metres. The rod is quite valuable and you would like to sell it in the market. Different lengths of the rod sell for different prices, and you would like to maximize your earning. To do this, you cut your rod into multiple (possibly just one) pieces and sell each of them in the market. Find an efficient algorithm that gives the maximum earning you can achieve by selling the rod.

To be more precise, you are given an array of $n$ integers whose $i$-th entry is the market price of an $i$-metre rod, for $0 < i \leq n$. Determine the maximum value obtainable by cutting up a rod of length $n$ metres and selling the pieces.

Walkthrough:
  1. What seem to be the natural subproblems for the given problem?
  2. We do not know where to make the first cut; it could be anywhere, right? Make a guess on what we should first guess.
  3. If your guess and choice of subproblems were correct, you should be quite close. You just want to see which of the subproblems you end up at after your initial guess is optimal; in a sense, you are looking for the maximum of a certain set of values.
If you like to code, you can now try to code this up!

Solution: Let the value of a rod of length $k$ metres be $a_k$ and let $f(k)$ denote the best possible earning obtainable by selling a rod of length $k$ metres. Here the guess is the length $i$ of the first piece we cut off. With the base case $f(0) = 0$, our recurrence is $$f(n) = \max_{i=1}^{n} \left( a_i + f(n-i) \right).$$
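A minimal bottom-up C++ sketch of this recurrence; the price array here is a made-up example:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // a[i] = market price of an i-metre rod; a[0] is unused. Example prices only.
    std::vector<int> a = {0, 1, 5, 8, 9, 10, 17, 17, 20};
    int n = (int)a.size() - 1;

    // f[k] = best earning from a rod of length k; f[0] = 0 (an empty rod earns nothing).
    std::vector<int> f(n + 1, 0);
    for (int k = 1; k <= n; ++k)
        for (int i = 1; i <= k; ++i)                 // guess the length i of the first piece
            f[k] = std::max(f[k], a[i] + f[k - i]);

    std::cout << f[n] << '\n';  // 22 for these example prices (cut into 2 + 6)
}
```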


Example: You are given an $m\times n$ grid; you start at $(1, 1)$ and your goal is to reach $(m, n)$. At any instant, if you are at $(x, y)$, you can move to either $(x, y + 1)$ or $(x + 1, y)$. Now suppose some cells of the grid contain obstacles that you cannot step on. How many unique paths are there?

Who said dynamic programming is limited to just one dimension? Here we use our DP approach with two different parameters.

Walkthrough:
  1. We ask the same question again: what seem to be the natural subproblems for the given problem?
  2. We do not know what the last move was: we could have come from the left or from below, right? Make a guess on what we should guess last.
  3. The problem is now quite similar to the original $n$ stairs problem; the only catch is that there are obstacles in our way. Figure out a neat way to add up the counts from each possibility of our guess, while keeping in mind that the cell we're currently considering may be blocked.
Again, if you like to code, you can now try to code this up!

Solution: Let $f(a,b)$ denote the number of ways to reach cell $(a,b)$ from $(1,1)$. With the base case $f(1,1) = 1$ (assuming the start is not blocked), and taking $f(a,b) = 0$ outside the grid, our recurrence is
$$f(a,b) = \begin{cases} f(a-1,b) + f(a,b-1) & \text{if } (a,b) \text{ is not blocked,} \\ 0 & \text{otherwise.} \end{cases}$$
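A bottom-up C++ sketch of this recurrence; the grid size and obstacle position are a made-up example:

```cpp
#include <iostream>
#include <vector>

int main() {
    // blocked[x][y] marks obstacle cells (1-indexed); this layout is illustrative only.
    int m = 3, n = 3;
    std::vector<std::vector<int>> blocked(m + 1, std::vector<int>(n + 1, 0));
    blocked[2][2] = 1;  // a single obstacle in the middle

    // f[a][b] = number of paths from (1,1) to (a,b); row/column 0 stay 0,
    // which handles the "outside the grid" case of the recurrence.
    std::vector<std::vector<long long>> f(m + 1, std::vector<long long>(n + 1, 0));
    f[1][1] = blocked[1][1] ? 0 : 1;  // base case: the starting cell
    for (int a = 1; a <= m; ++a)
        for (int b = 1; b <= n; ++b) {
            if ((a == 1 && b == 1) || blocked[a][b]) continue;
            f[a][b] = f[a - 1][b] + f[a][b - 1];  // came from below or from the left
        }

    std::cout << f[m][n] << '\n';  // 2 for this example grid
}
```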

Problems
Problem 1: Try solving Example 2 (the Weighted Activity Selection Problem) with the technique of dynamic programming.

Problem 2: The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given sequence such that all elements of the subsequence are in increasing order. For example, the length of the LIS for $\{10, 22, 9, 33, 21, 50, 41, 60, 80\}$ is $6$, and the LIS is $\{10, 22, 33, 50, 60, 80\}$.

Problem 3: The Longest Common Subsequence (LCS) problem is to find the longest subsequence present in both of two given sequences, in the same order; i.e., the longest sequence that can be obtained from the first sequence by deleting some items and from the second sequence by deleting other items.



Post by Rushil Mathur.

About the guest blogger: Rushil is an 11th grader from Mumbai. He’s been doing olympiad math for over 3 years now and he absolutely loves combinatorics. Apart from maths, he likes competitive programming and is always up for a chess match. 
Rushil is a 2022 INMO Awardee and a part of the Sophie Fellowship program.  He also won a bronze at the INOI 2022.


Post set up and modified by Pranav Choudhary.

In this post, I will present three graph theory problems in increasing difficulty, each with a common theme that one would determine a property of an edge in a complete graph through repeated iterations, and seek to achieve a greater objective. ESPR Summer Program Application: Alice and Bob play the following game on a $K_n$ ($n\ge 3$): initially all edges are uncolored, and each turn, Alice chooses an uncolored edge then Bob chooses to color it red or blue. The game ends when any vertex is adjacent to $n-1$ red edges, or when every edge is colored; Bob wins if and only if both condition holds at that time. Devise a winning strategy for Bob. This is more of a warm-up to the post, since it has a different flavor from the other two problems, and isn't as demanding in terms of experience with combinatorics. However, do note that when this problem was first presented, applicants did not know the winner ahead of time; it would be difficult to believe that Bob can hold such a strong