
Dynamic Programming

 Dynamic Programming is one of the most popular techniques in competitive programming and a powerful algorithm design technique. This handout aims to give a brief introduction to Dynamic Programming.

Optimization Algorithms
An algorithm is a sequence of steps that performs a certain procedure to reach a certain end goal. In particular, the problems we are going to look at in this handout are optimization problems - problems that aim at minimising or maximising something: finding a shortest path, or finding the best way to do something.

Greedy Algorithms
Greedy algorithms try to solve optimisation problems by making the decision that seems optimal in the current scenario - a choice that maximises short-term gain but may not be ideal in the long run.

Example 1 (Activity Selection Problem) : On a certain day, there are $n$ events that you would like to attend. Each of these events has a start time $s_i$ and a finish time $f_i$. Some of these events may overlap, and you are only allowed to attend events with no overlap. Provide an efficient algorithm that lets you maximise the number of events you attend (completely).

Walkthrough : Note that we haven't formally introduced the notion of an efficient algorithm. The idea is that if we allowed any algorithm, we could have just brute-forced through each of the $2^n$ possibilities and seen which one attends the maximum number of events. However, this is exponential in the number of events and is therefore very inefficient. Instead, we want an algorithm that requires at most a number of steps polynomial in $n$.
  1. Work out why a greedy algorithm that picks the shortest event, followed by the next shortest event that can still be picked (and so on), does not work.
  2. Show that a greedy algorithm that tries to pick the event with minimum $s_i$ (that can be picked) at any point does not work.
  3. Prove that the previous idea with $s_i$ replaced with $f_i$ works.
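For concreteness, the greedy algorithm from step 3 can be sketched as follows (a minimal Python sketch; the function name, the input format of `(start, finish)` pairs, and the convention that an event may begin exactly when the previous one ends are my assumptions):

```python
def max_events(events):
    # Greedy by finish time: sort events by f_i and repeatedly take
    # the next event that starts after the last chosen event ends.
    count = 0
    last_finish = float("-inf")
    for s, f in sorted(events, key=lambda e: e[1]):
        if s >= last_finish:  # no overlap with the previously chosen event
            count += 1
            last_finish = f
    return count

print(max_events([(1, 3), (2, 5), (4, 7), (6, 8)]))  # 2: e.g. (1,3) then (4,7)
```

Sorting dominates the running time, so this takes $O(n \log n)$ steps - comfortably polynomial in $n$.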
Example 2 (Weighted Activity Selection Problem) : This is the same as the previous problem, except now each event has a value $v_i$ and your aim instead is to maximise the sum of the values of the events you attended.

Walkthrough (or not?) : Show that the greedy algorithm in the previous example fails here.

Note that it is in fact not possible to solve this problem with a greedy algorithm. To solve it, we are going to have to expand our toolkit and learn the technique of Dynamic Programming.

Dynamic Programming
Dynamic Programming is a very powerful algorithmic technique, used throughout computer science to solve optimization problems. It is essentially an improvement to simple recursion: it solves a problem by dividing it into smaller sub-problems and then combining the results of those sub-problems to get the final answer.
The technique was invented by Richard Bellman. Here's an interesting story on the origin of dynamic programming. 

Bellman explained that he invented the name dynamic programming to hide the fact that he was doing mathematical research. He was working at a place called RAND, under a Secretary of Defense who had a pathological fear and hatred of the term research.

So he settled on the name dynamic programming because it would be difficult to give a bad meaning to it. It was something nobody would object to.

Long story short, the name sounded cool.

Dynamic programming can also be thought of as a Careful Brute Force - which may sound odd, but we will soon see why it makes sense.
As an algorithm, dynamic programming usually runs through all possible outcomes of a certain event but does so in a careful way that avoids exponential time.

Let's do an example which, while not technically an optimization problem, illustrates the ideas of dynamic programming well.

Example : You are climbing a staircase. It takes $n$ steps to reach the top. Each time you can climb either 1 or 2 steps. In how many distinct ways can you climb to the top?

Walkthrough : This problem is quite easy, all we need to find is a pattern in the number of ways for increasing $n$.
  1. If you haven't already done this, figure out the number of ways for small $n$ until you can see what's happening.
  2. Show that the pattern you found is in fact true.
If you solved the problem correctly, you would have noticed that there is a recurrence relation (the Fibonacci recurrence) using which we can figure out the required number of ways. Here is how the code for this might look :
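A direct recursive implementation (a Python sketch, treating $f(1) = 1$ and $f(2) = 2$ as base cases) might be:

```python
def f(n):
    # Number of ways to climb n stairs taking 1 or 2 steps at a time.
    if n <= 2:
        return n  # 1 way for one stair, 2 ways for two stairs
    # Either the first move is 1 step (leaving n-1) or 2 steps (leaving n-2).
    return f(n - 1) + f(n - 2)

print(f(5))  # 8
```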

While this definitely works, there is a major problem with the code written above: we end up performing the same computations a large number of times, since we do not remember the values we have already calculated.
Notice that, in computing $f(4)$ say, $f(2)$ is calculated once inside the call to $f(3)$ and once again directly - and this repetition slows down our program a lot.
To avoid this we use the technique of memoization. The idea is that we remember the answers to sub-problems we have already solved rather than re-calculating them. This brings in the "careful" nature of the brute force.

A solution using memoization for the above problem may look something like the below:
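Here is one way the memoized version might look (again a Python sketch; a dictionary stores the answers to already-solved subproblems):

```python
memo = {}

def f(n):
    # Same recurrence as before, but each subproblem is solved at most once.
    if n <= 2:
        return n
    if n not in memo:  # compute only if we haven't already
        memo[n] = f(n - 1) + f(n - 2)
    return memo[n]

print(f(50))  # instant, while the naive version would take ages
```

With memoization, each value $f(k)$ is computed only once, so the running time drops from exponential to linear in $n$.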
To master the skill of dynamic programming, there are 5 key steps that we should remember:
  1. Define the subproblems : In the above problem subproblems are essentially climbing a smaller number of stairs.
  2. Guess your first move : This is important! When we do not know how to start, we should try every possibility. In the above problem, we did not know whether the first move was a single step or two and we therefore consider both cases and build a recurrence on that guess.
  3. Find the recurrence relation : a relation between different subproblems. This is essentially what allows us to solve the problem. In the above example we had $$f(n) = f(n-1) + f(n-2)$$ And it is this recurrence that allows us to calculate the number of required ways.
  4. Recurse and memoize : We write the code and remember to remember!
  5. Solve the original problem : At the end, we evaluate the function at the particular values we actually want.
Let us now discuss 2 more cool problems on dynamic programming and see how each of our key steps applies to solving the problem.

Example : Let's say you are given a rod of length $n$ metres. The rod is quite valuable and you would like to sell it in the market. Different lengths of the rod can be sold for different prices and you would like to maximise your earning. To do this, you cut your rod into multiple (possibly just one) parts and sell each of them in the market. Find an efficient algorithm that gives you the maximum earning you can achieve by selling the rod.

To be more precise, you are given an array of $n$ integers, the $i$-th of which represents the market price of an $i$-metre rod, for $0<i\leq n$. Determine the maximum value obtainable by cutting up a rod of length $n$ metres and selling the pieces.

Walkthrough : 
  1. What seem to be the natural subproblems for the given problem?
  2. We do not know where to first cut the rod - it could be anywhere, right? So make a guess : what should the length of the first piece be?
  3. If your guess and choice of subproblems was correct, you should be quite close. You just want to see which of the subproblems you ended up at after your initial guess is optimal. In a sense you're looking to find the maximum of a certain set of values.
If you like to code, you can now try to code this up!

Solution : Let the value of a rod of length $k$ metres be $a_k$ and let $f(k)$ denote the best possible price obtainable by selling a rod of length $k$ metres. With the base case $f(0)=0$, our recurrence is $$f(n) = \max_{i=1}^{n} (a_i + f(n-i))$$
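In code, the recurrence might look like this (a Python sketch; using `prices[i - 1]` to stand in for $a_i$ is my indexing convention):

```python
def best_price(prices):
    # prices[i - 1] is the market price of an i-metre rod (a_i above).
    n = len(prices)
    memo = {0: 0}  # base case: a rod of length 0 earns nothing

    def f(k):
        if k not in memo:
            # Guess the length i of the first piece we cut off and sell.
            memo[k] = max(prices[i - 1] + f(k - i) for i in range(1, k + 1))
        return memo[k]

    return f(n)

print(best_price([1, 5, 8, 9]))  # 10: cut 4 = 2 + 2 and sell for 5 + 5
```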

Example : Given a grid of size $m\times n$, assume you start at $(1, 1)$ and your goal is to reach $(m, n)$. At any instance, if you are on $(x, y)$, you can either go to $(x, y + 1)$ or $(x + 1, y)$. Now suppose some obstacles are added to the grid. How many unique paths are there?

Who said dynamic programming is limited to just one dimension? Here we use our DP approach with 2 different parameters. 

Walkthrough : 
  1. We ask the same questions again, what seem to be the natural subproblems for the given problem?
  2. We do not know what the last move was - we could have come from the left or from below, right? So make a guess : what was the last move?
  3. The problem is now somewhat similar to original $n$ stairs problem we had tried. The only problem is that we have a couple of obstacles in our way. Figure out a neat way to add up the cases of each possibility of our guess while keeping in mind that the cell we're currently considering may be blocked.
Again, if you like to code, you can now try to code this up!

Solution : Let $f(a,b)$ denote the number of ways to reach cell $(a,b)$ from $(1,1)$, with $f(a,b) = 0$ whenever $a < 1$ or $b < 1$, and the base case $f(1,1) = 1$ (assuming the starting cell is not blocked). Our recurrence is
$$f(a,b) = \begin{cases}f(a-1,b) + f(a,b-1) & \text{if } (a,b) \text{ is not blocked} \\ 0 & \text{otherwise} \end{cases}$$
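This translates directly into code (a Python sketch; I use 0-indexing instead of the $(1,1)$ convention above, and representing blocked cells by 1 in the input grid is my assumption):

```python
def unique_paths(grid):
    # grid[a][b] == 1 means cell (a, b) is blocked.
    m, n = len(grid), len(grid[0])
    f = [[0] * n for _ in range(m)]
    for a in range(m):
        for b in range(n):
            if grid[a][b] == 1:
                f[a][b] = 0  # a blocked cell can never be reached
            elif a == 0 and b == 0:
                f[a][b] = 1  # base case: the starting cell
            else:
                above = f[a - 1][b] if a > 0 else 0
                left = f[a][b - 1] if b > 0 else 0
                f[a][b] = above + left
    return f[m - 1][n - 1]

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(unique_paths(grid))  # 2: go around the central obstacle either way
```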

Problem 1 : Try solving Example 2 (the Weighted Activity Selection Problem) with the technique of dynamic programming.

Problem 2 : The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given sequence such that all elements of the subsequence are sorted in increasing order. For example, the length of the LIS of $\{10, 22, 9, 33, 21, 50, 41, 60, 80\}$ is $6$, attained by $\{10, 22, 33, 50, 60, 80\}$.

Problem 3 : The Longest Common Subsequence (LCS) problem is to find the longest subsequence present in two given sequences in the same order, i.e., the longest sequence that can be obtained from the first sequence by deleting some items and from the second sequence by deleting other items.

Post by Rushil Mathur.

About the guest blogger: Rushil is an 11th grader from Mumbai. He’s been doing olympiad math for over 3 years now and he absolutely loves combinatorics. Apart from maths, he likes competitive programming and is always up for a chess match. 
Rushil is a 2022 INMO Awardee and a part of the Sophie Fellowship program.  He also won a bronze at the INOI 2022.

Post set up and modified by Pranav Choudhary.

