
Basic Combinatorial Ideas Part 1

Hiiiiiiiii everyone! Today I will be showing you a few combinatorics problems which are not very hard. Each one has a different idea. My aim is to introduce you to various ideas you can think of while trying combo problems, and to make you realize that combo is very fun!!!!

Oki without wasting much time, let's begin :P

For those of you who want to try the problems before seeing the solutions/ideas presented, here's a list of the problems discussed below (note that I have modified a few questions, and some of the modified statements give away the answers to the original ones, so you can read the problems from AoPS instead if you're too worried about getting "spoiled" xD)

1) ISL 2009 C1 (a) 
2) USAMO 1997 P1
3) IMO 2011 P4
4) USAMO 2002 P1
5) (adapted from) Canada 2018 P3

Rather than just giving out the solution, I will try to motivate how you come up with such a step, so that it helps to develop intuition :D


Problem 1 [ISL 2009 C1 (a)] : Consider $2009$ cards, each having one gold side and one black side, lying in parallel on a long table. Initially all cards show their gold sides. Two players, standing by the same long side of the table, play a game with alternating moves. Each move consists of choosing a block of $50$ consecutive cards, the leftmost of which is showing gold, and turning them all over, so those which showed gold now show black and vice versa. The last player who can make a legal move wins. Does the game necessarily end?

Ideas : So the first thing you notice is that the problem only cares about how the cards are placed, i.e. whether a specific card is showing its gold side or its black side. Well, it's kinda hard to keep track of so many cards with different colors altogether, so we look for some other way to represent the state. Do we really need cards? They don't serve any specific purpose here, so the answer is no. Instead, what we do is represent a card that is showing its gold side by $1$ and a card that is showing its black side by $0$.

So, initially, you have a sequence of $2009$ $1$'s, and in each move, you change $50$ consecutive numbers from $1$ to $0$ or $0$ to $1$ such that the first of those numbers was $1$. Moreover, since the only digits we are using are $0$ and $1$, if we read the whole sequence as a binary number, then at any given point in time it is a non-negative integer.

Now if you read the question carefully (or the rephrased version), you notice a line that says 

The first number among the $50$ consecutive numbers should initially be $1$

But how does this help us? Well, in every move you pick a block of consecutive digits whose first digit is $1$, e.g. $$1011\ldots$$
and you flip all of them, so the above block becomes $$0100\ldots$$
So what a move does is $$1011\ldots \mapsto 0100\ldots$$
Do you see something interesting? Read as binary numbers, the second one is smaller than the first! And this always holds, because the leading digit of the block was $1$ before the move and is $0$ after it. Now suppose this block is surrounded by some other digits too, does this still hold? The answer is YES! As a quick exercise, you can try to prove it yourself :P 

Okay, so now let's accumulate all the information we have:-
1) The initial sequence is $\underbrace{1 \ 1 \ \ldots \ 1}_{2009\text{ ones}}$
2) After every move, the binary number represented by the sequence strictly decreases.
3) This number is always a non-negative integer.

Hmm, this looks interesting. Ok so from points $2$ and $3$ we can see that a non-negative integer strictly decreases with every move, but this can't go on forever: if it kept decreasing infinitely, it would eventually become negative, which can't happen. And hence, the game must always end.

Remarks : The main idea of this problem was to note that a lower-bounded, strictly decreasing sequence of integers cannot go on forever, and that the gold and black sides of the cards are just flavor text: you can use binary digits to denote them instead. Rewriting the problem in a different form is a very important step in many questions.
You can generalize this problem to $n$ cards, where you flip $k$ cards in a given move; the idea is still the same.
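
If you want to see this monovariant in action, here's a tiny Python sketch (my own addition, not part of the solution) that plays random legal moves on a scaled-down version of the game, with $16$ cards and blocks of $3$ instead of $2009$ and $50$, and checks after every single move that the binary value strictly decreases until no legal move is left:

```python
import random

def binary_value(cards):
    # read the row of cards (1 = gold side up, 0 = black side up) as a binary number
    return int("".join(map(str, cards)), 2)

def play_random_game(n, k):
    cards = [1] * n                    # all cards start gold side up
    value = binary_value(cards)
    moves = 0
    while True:
        # legal moves: blocks of k consecutive cards whose leftmost card shows gold
        legal = [i for i in range(n - k + 1) if cards[i] == 1]
        if not legal:
            return moves               # no legal move left: the game has ended
        start = random.choice(legal)
        for j in range(start, start + k):
            cards[j] ^= 1              # flip the whole block
        new_value = binary_value(cards)
        assert new_value < value       # the monovariant: the value strictly drops
        value = new_value
        moves += 1

print(play_random_game(16, 3))         # always terminates, whatever moves are chosen
```

However the moves are chosen, the assert never fires and the loop always stops, which is exactly what the monovariant argument guarantees.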


Problem 2 [USAMO 1997 P1] : Let $p_1, p_2, p_3, \ldots$ be the prime numbers listed in increasing order, and let $x_0$ be a real number with $0 < x_0 < 1$. For each positive integer $k$, define

\[x_k = \begin{cases} 0 & \mbox{if} \; x_{k-1} = 0, \\[.1in] {\displaystyle \left\{ \frac{p_k}{x_{k-1}} \right\}} & \mbox{if} \; x_{k-1} \neq 0, \end{cases}\]

where $\{x\}$ denotes the fractional part of $x$. Find, with proof, all $x_0$ satisfying $0 < x_0 < 1$
for which the sequence $x_0, x_1, x_2, \ldots$ eventually becomes $0$.

Ideas : In this problem, if $x_{i - 1} \ne 0$ and $x_i = 0$, it means that $$\left\{ \frac{p_i}{x_{i-1}}\right\}  = 0,$$ i.e. $\frac{p_i}{x_{i-1}}$ is an integer, so $x_{i - 1} = \frac{p_i}{m}$ for some integer $m > p_i$. In particular, such an $x_{i - 1}$ has to be rational.

Now basically you try a few seemingly nice values of $x_0$ and make some conclusions. Well, if you have enough patience and try out a few values, you'll notice that most of them work out... This is because when we think of picking a "nice" value, we almost always pick a rational one; only very rarely would we pick an irrational value of $x_0$. So if you only tried rational values, you might guess that all possible $x_0$ work, but be careful! You didn't try irrational values.

So on the basis of our observations, we claim that all rational values of $x_0$ work, and irrational values don't. Let's prove that rational values work. We know that $x_0 \in (0,1)$, so it's very natural to consider $x_0 = \frac{a}{b}$, where $a, b$ are positive integers with $a < b$.
$$\implies x_1 = \left\{ \frac{2b}{a} \right\} $$
Now when we simplify the above expression for $x_1$ and write it as a fraction, we see that it is of the form $x_1 = \frac{t}{a}$ with $t < a$ (in fact $t$ is the remainder when $2b$ is divided by $a$; if $t = 0$ we are already done). Now we find $x_2$
$$x_2 = \left\{\frac{3a}{t}\right\}$$
So basically what happened was that the numerator of $x_i$ became the denominator of $x_{i+1}$, and the numerator of $x_i$ is always less than the denominator of $x_i$, since $x_i < 1$ by definition. Hence either the denominators keep strictly decreasing, or at some point the division is exact and the sequence hits $0$. The first case can be ruled out using the idea we used in Problem 1 !! Can you figure it out and finish it? The second case is exactly what we want, sooo you can finish it as an exercise :P
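
Just to see these shrinking denominators concretely, here's a small Python sketch (my own addition, purely illustrative; the helper names are made up) that iterates the recursion with exact fractions:

```python
from fractions import Fraction

def primes():
    # simple unbounded prime generator (trial division is fine at this scale)
    found, p = [], 2
    while True:
        if all(p % q for q in found):
            found.append(p)
            yield p
        p += 1

def frac_part(q):
    # fractional part of a non-negative rational number
    return q - (q.numerator // q.denominator)

def trace(x0, max_steps=50):
    # follow x_k = { p_k / x_{k-1} } starting from a rational x_0 in (0, 1)
    x = Fraction(x0)
    seq = [x]
    for p in primes():
        if x == 0 or len(seq) > max_steps:
            break
        x = frac_part(Fraction(p) / x)
        seq.append(x)
    return seq

print(trace(Fraction(7, 13)))
```

For $x_0 = \frac{7}{13}$ this prints $\frac{7}{13}, \frac{5}{7}, \frac{1}{5}, 0$: the denominators $13, 7, 5$ shrink exactly as the argument predicts, until the division is exact and the sequence dies.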

Now we also have to prove that irrationals don't work, hmm. For this, recall the fact from the very beginning: the term just before the sequence hits $0$ has to be rational. So if we start with an irrational $x_0$ and show that every term of $\{x_n\}_{n \ge 0}$ stays irrational, we are done. This is kind of easy to prove, so I recommend trying it on your own! But for sanity purposes, here's the idea:- 
If $x$ is irrational, then so is $\frac{1}{x}$, and hence so is $\left\{\frac{p}{x}\right\}$ for any prime $p$; now you should be able to finish by induction using how $x_i$ is defined :P

Remarks : The main thing is to guess the answer correctly, and realize that putting $x_0 = \frac{a}{b}$ is nice. Tho at first glance it looks like you're adding an extra variable in some sense, well sometimes you gotta be strong and just work it out!


Problem 3 [IMO 2011 P4] : Let $n > 0$ be an integer. We are given a balance and $n$ weights of weight $2^0, 2^1, \cdots, 2^{n-1}$. We are to place each of the $n$ weights on the balance, one after another, in such a way that the right pan is never heavier than the left pan. At each step we choose one of the weights that has not yet been placed on the balance, and place it on either the left pan or the right pan, until all of the weights have been placed.
Determine the number of ways in which this can be done.

Ideas: At first glance, the most natural thing is to try small cases, $n = 1, 2, 3$ and maybe a few more. Counting the number of ways directly for general $n$ seems a bit vague, right? So it makes sense to try to express the count for $n$ in terms of the counts for $n - 1$ and maybe other smaller values. To formalize this, we let $a_n$ be the number of required ways, and we basically try to find a recurrence for $a_n$. 

Now, we try to extract some information from the question. Note that the weights are in a geometric progression; there should be a reason behind that, right? It can't just be random, and it's definitely not the same as having the weights be $1, 2, 3, 4, \ldots$. But what's special about this geometric series and the total weight on each pan after placing $i$ weights?
One interesting thing to note is that $$2^0 + 2^1 + \ldots + 2^k < 2^{k +1}$$ But how is this useful? The point is that once you place the heaviest weight $2^{n-1}$ (it must always go on the left pan), all the weights placed after it can go anywhere: the left pan then already holds more than all the other $n - 1$ weights combined, so the right pan can never catch up.

Suppose we place the weight $2^{n - 1}$ after $i$ other weights have already been placed. First, we choose which $i$ of the other $n - 1$ weights those are, in $\binom{n - 1}{i}$ ways. Next, those $i$ weights can be arranged on the pans in $a_i$ ways: any $i$ of our weights behave exactly like the weights $2^0, \ldots, 2^{i-1}$, precisely because of the property of the geometric progression mentioned above. So there are $$\binom{n - 1}{i} \cdot a_i$$
ways to place those $i$ weights before placing the weight $2^{n - 1}$. Clearly, this weight can only go to the left pan, so there's only $1$ way to place it. Now the remaining $n - i - 1$ weights can be placed on either pan in any order: there are $(n - i - 1)!$ possible orders, and each weight has $2$ choices of pan, giving $$2^{n - i - 1}\cdot (n - i - 1)!$$ ways to place the remaining weights. 
Combining, we get that if the weight $2^{n - 1}$ is placed after $i$ other weights, the number of ways is $$2^{n - i - 1} \cdot (n - i - 1)! \binom{n - 1}{i} \cdot a_i = 2^{n - i - 1} \cdot \frac{(n - 1)!}{i!}\cdot a_i$$
Now, notice that $i$ can be anything from $0$ to $n - 1$
$$\implies a_n = \sum_{i = 0}^{n - 1} \left(2^{n - i - 1} \cdot \frac{(n - 1)!}{i!}\cdot a_i\right)$$
Now this sum can be solved easily, but for the sake of completeness, here's the remaining solution :- 
$$\implies a_n = \sum_{i=0}^{n-2} \left(2^{n-i-1}\cdot \frac{(n-1)!}{i!}\cdot a_i\right) + a_{n-1}$$
$$\implies a_n = 2(n-1) \cdot\sum_{i=0}^{n-2} \left(2^{n-i-2}\cdot \frac{(n-2)!}{i!}\cdot a_i\right) + a_{n-1} $$
Note that we can write $$a_{n - 1} = \sum_{i = 0}^{n - 2} \left(2^{n - i - 2}\cdot \frac{(n - 2)!}{i!} \cdot a_i\right)$$
$$\implies a_n = 2(n-1)\cdot a_{n-1} + a_{n-1} = (2n-1)a_{n-1}$$
Since $a_1 = 1$, we get 
$$a_n = (2n - 1)\cdot (2n - 3) \cdot \ldots \cdot 3 \cdot 1$$ $$\implies \boxed{a_n = (2n - 1)!!}$$
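
If you want to sanity-check the boxed answer, here's a short Python brute force (my own sketch, not needed for the proof) that enumerates every order of placement and every left/right choice for small $n$ and compares the count with $(2n - 1)!!$:

```python
from itertools import permutations
from math import prod

def count_valid(n):
    # brute force: try every placement order and every left/right assignment
    weights = [2**i for i in range(n)]
    total = 0
    for order in permutations(weights):
        for mask in range(2**n):       # bit i set = the i-th placed weight goes right
            left = right = 0
            ok = True
            for i, w in enumerate(order):
                if (mask >> i) & 1:
                    right += w
                else:
                    left += w
                if right > left:       # the right pan must never be heavier
                    ok = False
                    break
            total += ok
    return total

def double_factorial(n):
    return prod(range(2 * n - 1, 0, -2))   # (2n - 1)!!

for n in range(1, 6):
    print(n, count_valid(n), double_factorial(n))
```

For $n = 1, \ldots, 5$ both counts come out as $1, 3, 15, 105, 945$, matching $(2n - 1)!!$.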

Remarks: The main idea is to realize that you can write $a_n$ in terms of smaller $n$'s, and basically to realize how to do that, which is a bit non-trivial. Then solving the recurrence is pretty standard if you have enough practice with it.


Problem 4 [USAMO 2002 P1] : Let $S$ be a set with $2002$ elements, and let $N$ be an integer with $0 \leq N \leq 2^{2002}$. Prove that it is possible to color every subset of $S$ either black or white so that the following conditions hold: 
  • the union of any two white subsets is white;
  • the union of any two black subsets is black;
  • there are exactly $N$ white subsets. 
Ideas: Clearly at first glance it's obvious that $2002$ is just a bluff, and the statement should hopefully hold for all $k$, or maybe only for even $k$, or maybe for all $k$ with some specific property (here $k = |S|$). Moreover, for simplicity we can assume that if $|S| = t$ then $S = \{1, 2, \ldots, t\}$.
The most natural thing now is to try for $k = 1, 2, 3\ldots$. After you try it with enough patience, you realize it's true for all small values of $k$, so you make a guess that it might be true for all $k$! 

Now the next thing you try to see is whether there's some sort of "pattern" in how to color, because c'mon, $2^{2002}$ subsets is like very HUGE. So if you're observant enough, you can notice that we can color the subsets inductively. For $k = 1, 2$ it's easy.

Assume that we can color all the subsets when $|S| = k - 1$, for every valid value of $N$; we need to prove we can also color the subsets of $S$ when $|S| = k$ with the given conditions.
If $N \le 2^{k - 1}$, color white exactly those subsets which were white in the $k - 1$ case, and color every subset containing the element $k$ black. Note that none of the subsets from the $k - 1$ case contains the element $k$, while any union involving a subset that contains $k$ again contains $k$ and is hence black. So there are exactly $N$ white subsets, and the other $2$ properties are satisfied (if this doesn't seem obvious to you, think it over by considering some small case, say $k - 1 = 4$, and try to prove it yourself!)

Now what do we do when $N > 2^{k - 1}$?? Notice that if there are exactly $N$ white subsets, then there are exactly $2^k - N$ black subsets! And
$$2^k - N < 2^k - 2^{k - 1} = 2^{k - 1}$$
This means there are fewer than $2^{k - 1}$ black subsets. So take the coloring from the previous case with exactly $2^k - N$ white subsets and swap the two colors: the union conditions are symmetric in black and white, so the swapped coloring still works, and it now has exactly $N$ white subsets. SO YAYYY we proved the problem for all $|S| \ge 1$.
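
Here's a small Python sketch (my own addition, just to make the induction concrete; all the function names are made up) that builds the white family exactly as in the argument above and verifies the three conditions for every $N$ when $|S| \le 4$:

```python
from itertools import combinations

def all_subsets(k):
    # every subset of {1, ..., k}, as frozensets
    return {frozenset(c) for r in range(k + 1)
            for c in combinations(range(1, k + 1), r)}

def white_family(k, N):
    # the family of white subsets of {1, ..., k}, built exactly as in the induction
    assert 0 <= N <= 2**k
    if k == 0:
        return {frozenset()} if N == 1 else set()
    if N <= 2**(k - 1):
        # reuse the colouring for k - 1; every subset containing the element k stays black
        return white_family(k - 1, N)
    # N > 2^(k-1): build the colouring with 2^k - N white subsets and swap the colours
    return all_subsets(k) - white_family(k, 2**k - N)

def union_closed(family):
    return all(a | b in family for a in family for b in family)

def valid(k, N, white):
    black = all_subsets(k) - white
    return len(white) == N and union_closed(white) and union_closed(black)

assert all(valid(k, N, white_family(k, N)) for k in range(5) for N in range(2**k + 1))
print("construction verified for |S| <= 4")
```

The same code works for larger $|S|$ too; the check just gets slower because there are $2^{|S|}$ subsets to handle.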

Remarks : The main thing was to notice that $2002$ is not important. Moreover realizing that we can do this inductively is a key idea to solve this. I personally like this problem a lot because you don't really need paper to solve this, you can try this problem in your mind when just hovering around and finish it off :P 


Problem 5 [Canada 2018 P3] : Two positive integers $a$ and $b$ are prime-related if either $a/b$ or $b/a$ is prime. Prove that if $n \ne 1$ is a perfect square then it's not possible to arrange all the divisors of $n$ in a circle, so that any two adjacent divisors are prime-related. 

Ideas : Since we are talking about prime thingies and divisors of $n$, it's natural to consider $n = p_1^{a_1}p_2^{a_2} \ldots p_k^{a_k}$, where each $a_i$ is even because $n$ is a perfect square. Now working directly with general $k$ seems a bit hard, so we try to work with smaller values of $k$ first. For $k = 1$ it's trivial, so let's take $k = 2$ first. Now I don't really know how to motivate the next step, but maybe it just comes with practice and observation.

We basically try to make things look a bit more organized. So what we do is arrange all the divisors of $n$ in an $(a_1 + 1) \times (a_2 + 1)$ grid, with the top-left entry equal to $1$: whenever we move one step to the right we multiply by $p_1$, and whenever we move one step down we multiply by $p_2$. So basically the grid looks something like this:-
$$\begin{array}{cccc} 1 & p_1 & p_1^2 & \cdots \\ p_2 & p_1p_2 & p_1^2p_2 & \cdots \\ p_2^2 & p_1p_2^2 & p_1^2p_2^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array}$$
Now, what we try to do is consider a cycle starting and ending at $1$, such that in each step we move exactly one cell right/left/up/down; basically, no diagonal moves are permitted. This ensures that consecutive entries are prime-related. We also want the cycle to pass through every number in the grid exactly once, so that it corresponds to arranging the divisors in a circle. So if the divisors could be arranged in a valid circle, then such a cycle would exist in the grid, and we have to show that this is impossible. Note that from $1$ you can only go to $p_1$ or $p_2$, so say the cycle leaves $1$ towards $p_1$; then, to come back to $1$, the second-to-last element can only be $p_2$ (think why?). 

This next step is a bit hard to think of but is a fairly standard idea. We now do a chessboard coloring of this grid. Since $p_1$ and $p_2$ are diagonal to each other, they get the same color. Every step of the path changes the color, so a path starting at $p_1$ and ending at $p_2$ must pass through an odd number of cells. But our path has to pass through every divisor other than $1$, i.e. through $$(a_1 + 1)(a_2 + 1) - 1$$ cells. Since $a_1, a_2$ are even, this number is even, which is a contradiction.
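
If you'd like a computational sanity check of the $k = 2$ case before reading the general argument, here's a small Python brute force (my own sketch, not part of the proof) over circular arrangements of the divisors of a small perfect square, with a non-square thrown in for contrast:

```python
from itertools import permutations

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def prime_related(a, b):
    big, small = max(a, b), min(a, b)
    return big % small == 0 and is_prime(big // small)

def has_circle(n):
    # is there a circular arrangement of all divisors of n in which
    # every pair of neighbours is prime-related?
    divs = divisors(n)
    first, rest = divs[0], divs[1:]
    for perm in permutations(rest):        # fix the first divisor to skip rotations
        cycle = (first,) + perm
        if all(prime_related(cycle[i], cycle[(i + 1) % len(cycle)])
               for i in range(len(cycle))):
            return True
    return False

print(has_circle(36))   # 36 = 2^2 * 3^2 is a perfect square: prints False
print(has_circle(12))   # 12 is not a perfect square: prints True (e.g. 1, 2, 4, 12, 6, 3)
```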

Okay so this works! Now we try to generalize this idea for $k$ prime factors too :P Soooo get ready for the nicest idea hehe.....
We consider a $k$-dimensional grid, and whenever we move one step along the $i$-th direction, we multiply by the prime $p_i$. If it's hard to imagine it this way, we can instead think of each cell as a $k$-length tuple of indices, so the move
$$(x_1, \ x_2, \ \ldots, x_i, \ldots, x_k) \mapsto (x_1, x_2, \ \ldots, x_{i} + 1, \ldots, x_k)$$
corresponds to multiplying by $p_i$.
Now we do the same chessboard coloring, say by the parity of $x_1 + x_2 + \ldots + x_k$, so that two cells at distance $\sqrt{2}$ have the same color. Exactly as before, a valid circle would give a path from one prime to another, wlog from $p_1$ to $p_2$, passing through every divisor other than $1$, i.e. through $\prod_{i=1}^{k}(a_i + 1) - 1$ cells, which is an even number since every $a_i$ is even. But $p_1$ and $p_2$ have the same color, so the path would have to pass through an odd number of cells, and hence we get a contradiction.

This means that if $n \ne 1$ is a perfect square, then it's not possible to arrange all the divisors of $n$ in a circle so that any two adjacent divisors are prime-related. 

Remarks : The actual problem is a bit different; I found the other part of the problem boring, but this idea of considering a $k$-dimensional chessboard was pretty cool. Moreover, it comes quite naturally from the $k = 2$ case. Like at first glance you might try $k$ colors instead of just black/white for the $k$-dimensional grid, and that doesn't quite work, so you try some other things until you realize that the $2$-color idea just works.
Moreover, you could also rephrase this whole thing in terms of graphs, but I think that's not as clean as the idea presented above.



Phew! We are done for today, yayy. I hope you liked the problems and learned some new ideas from them. 
You can try the following questions, which are based on similar ideas, as exercises :-

Exercise 1 [Kazakhstan 2016] : Let $n\geq 2$ be a positive integer. Prove that all divisors of $n$ can be written as a sequence $d_1,\dots,d_k$ such that for any $1\leq i<k$ one of $\frac{d_i}{d_{i+1}}$
and $\frac{d_{i+1}}{d_i}$ is a prime number.

Exercise 2 [USAMO 2019 P4] : Let $n$ be a nonnegative integer. Determine the number of ways to choose sets $S_{ij} \subseteq \{1, 2, \dots, 2n\}$, for all $0 \le i \le n$ and $0 \le j \le n$ (not necessarily distinct), such that 
  • $|S_{ij}| = i+j$, and
  • $S_{ij} \subseteq S_{kl}$ if $0 \le i \le k \le n$ and $0 \le j \le l \le n$.

I will bring part $2$ of it soon where we will work on $5$ more problems hehe. Do tell in the comments which problem you liked the most, and if the overall difficulty was fair enough or if I should modify (increase/decrease) it a bit in part 2.

Byeeeeeee :DDD
Pranav

Comments

  1. Very nice blog, Pranav. I really enjoyed solving them and I'm looking forward to part 2. But I think it may be better if the difficulty level of the problems is increased in part 2.


