Linear Algebra in Graph Theory

Hi, I am Ameya Vikrama Singh, a college student with a recreational interest in mathematics. I was involved in Olympiad math in school, and these days I pursue it at leisure.

Here we will talk about two interesting applications of Linear Algebra in Graph Theory. It is highly recommended that you have some familiarity with Linear Algebra: the definition of linear independence, rank, the rank-nullity theorem, determinants, and some idea of eigenvalues. 3b1b's playlist is a great place to start, and you do not need very rigorous insight into these concepts. Some familiarity with the definitions of graph theory will help too. $G(n,e)$ denotes a simple graph $G$ with $n$ vertices and $e$ edges; all graphs in this post are simple, though you may extend the results beyond them. I encourage you to ponder the "why?" checkpoints as you follow the post.

Here we present a few definitions:

Fig 1: Sample Graph

Adjacency Matrix $A$: The adjacency matrix associated with a simple graph $G(n,e)$ is an $n \times n$ matrix, such that $a_{ij}=1$ if vertex $i$ and $j$ are connected by an edge and $0$ otherwise. For example, the adjacency matrix of the graph in figure 1 is given by 

$$A = \begin{pmatrix}0 & 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 1 \\ 1 & 0 &0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\\end{pmatrix}$$

Degree Matrix $D$: The degree matrix is an $n\times n$ diagonal matrix whose diagonal elements are the degrees of the corresponding nodes. For fig 1:

$$D = \begin{pmatrix}3&0&0&0&0\\0&3&0&0&0\\0&0&2&0&0\\0&0&0&4&0\\0&0&0&0&2\end{pmatrix}$$

Directed Incidence Matrix $B$: For every edge of an undirected graph, we assign an arbitrary direction to make it a simple directed graph. The incidence matrix of such a directed graph is an $n \times e$ matrix, where each column corresponds to an edge and contains a $+1$ for the vertex it leaves, and $-1$ for the vertex it enters. For example, (one of) the incidence matrices for fig 1 is:

$$B = \begin{pmatrix}1 & 1 & 1 & 0 & 0 &0 & 0 \\ -1 & 0 & 0 & 0 & -1 & 0&1\\ 0 & 0&-1&1&0&0&0\\0&-1&0&-1&1&-1&0\\0&0&0&0&0&1&-1\end{pmatrix}$$

Laplacian Matrix $L$: The Laplacian matrix is defined as $L = D-A$. Notice that the product $BB^T$ is independent of the orientation assigned to the undirected graph, and that $L = D-A = BB^T$ (why?). If you are familiar with the Laplacian operator ($\nabla^2$) from calculus, you might see how this matrix can be viewed as a discrete version of the second derivative. Indeed, for a discrete function $f:V \rightarrow \mathbb{R}$, which can be represented as an $n$-vector, applying the Laplacian matrix quantifies the difference between the value of the function at each vertex and the mean over its neighbours (scaled by the degree); if you are familiar with the Mean-value property, you might start to wonder what the Laplacian matrix represents as an operator.

$$L = \begin{pmatrix}3 & -1 & -1 & -1 & 0 \\ -1 & 3 & 0 & -1 & -1 \\ -1 & 0 &2& -1 & 0 \\ -1 & -1 & -1 & 4 & -1 \\ 0 & -1 & 0 & -1 & 2 \\\end{pmatrix}$$
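As a quick sanity check (a Python/numpy sketch, not part of the original toolkit; the edge orientation is the one chosen for $B$ above), we can build these matrices for fig 1 and confirm that $L = D - A = BB^T$:

```python
import numpy as np

# Adjacency matrix of the graph in fig 1 (5 vertices, 7 edges)
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 1, 0, 1, 0],
])

# Degree matrix: row sums of A on the diagonal
D = np.diag(A.sum(axis=1))

# Incidence matrix for one arbitrary orientation of the 7 edges
B = np.array([
    [ 1,  1,  1,  0,  0,  0,  0],
    [-1,  0,  0,  0, -1,  0,  1],
    [ 0,  0, -1,  1,  0,  0,  0],
    [ 0, -1,  0, -1,  1, -1,  0],
    [ 0,  0,  0,  0,  0,  1, -1],
])

L = D - A
print(np.array_equal(L, B @ B.T))  # True: the two definitions agree
```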

We begin by analysing a few properties of these matrices pertaining to graph theory. Each row and column of the Laplacian matrix sums to 0. This implies that the Laplacian's determinant is 0, and thus 0 is one of its eigenvalues, with eigenvector $(1,1,\cdots,1)$. Now, there is this cool theorem from Linear Algebra that says $\text{rank}(L) = \text{rank}(B)$, and also this other cool theorem from Linear Algebra, which says the number of linearly independent rows of $B$ equals the number of linearly independent columns of $B$. An immediate corollary is that $B$ can have rank at most $n-1$ (why?). Now consider some $n-1$ columns of $B$ that are linearly independent. These represent $n-1$ edges in the graph. If these edges contained a cycle (a path starting and ending at the same vertex), the columns couldn't be linearly independent (why?). Thus, these $n-1$ edges contain no cycle, which implies they must form a tree (note that you also need connectedness for a tree; why should they be connected?). This gives us a necessary and sufficient condition for the connectedness of a graph: that $B$, and hence $L$, have rank $n-1$.

Try extending this result to the number of components in a disconnected graph.
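A short numerical illustration of the rank criterion (the disconnected second graph is my own example, chosen for contrast):

```python
import numpy as np

# Laplacian of the (connected) graph in fig 1
L = np.array([
    [ 3, -1, -1, -1,  0],
    [-1,  3,  0, -1, -1],
    [-1,  0,  2, -1,  0],
    [-1, -1, -1,  4, -1],
    [ 0, -1,  0, -1,  2],
])
print(np.linalg.matrix_rank(L))  # 4 = n - 1, so the graph is connected

# Laplacian of a disconnected graph: a triangle plus a lone edge
L2 = np.array([
    [ 2, -1, -1,  0,  0],
    [-1,  2, -1,  0,  0],
    [-1, -1,  2,  0,  0],
    [ 0,  0,  0,  1, -1],
    [ 0,  0,  0, -1,  1],
])
print(np.linalg.matrix_rank(L2))  # 3 = n - 2: two components
```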

Electric Circuits and Kirchhoff's Laws

Kirchhoff's Laws, together with Ohm's Law, provide a linear system of equations for solving an electric circuit, so they are naturally written in matrix notation. Let us start with a connected graph $G(n,e)$, all of whose edges are $1\,\Omega$ resistors. Further, let us assign node voltages $v = (V_1, V_2, V_3, \cdots, V_n)$ to the $n$ nodes. This eliminates the need for Kirchhoff's Voltage Law. Consider what the product $Lv$ represents. Each of its $n$ elements quantifies the sum of voltage differences between a node and its neighbours. Since all edges are assumed to be $1\,\Omega$ resistors, this equals the net current leaving the node. Since current cannot arise out of nowhere, this current must either be provided externally, or the term must be 0. Thus, this term represents the current fed into the corresponding node. We can write this succinctly as $$Lv = i$$

Observe that in continuous settings, Ohm's Law ($\mathbf{j} = \sigma \mathbf{E}$, with $\mathbf{E} = -\nabla V$) gives a similar equation, $-\sigma \nabla^2 V = \nabla \cdot \mathbf{j}$ (Poisson's Equation). Why is the conductance term 'missing' in the discrete case?

Typically, a current of 1A is fed into one node and taken out from another, and this equation can be used to determine the corresponding potential distribution (Recall the rank of $L$, can this equation uniquely determine $v$?).
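Since $L$ has rank $n-1$, the solution $v$ is determined only up to an additive constant. One convenient numerical way to pick a particular solution is the Moore-Penrose pseudoinverse (an implementation choice of mine, not part of the post's derivation); a sketch for the edge 1-2 of fig 1:

```python
import numpy as np

# Laplacian of the graph in fig 1
L = np.array([
    [ 3, -1, -1, -1,  0],
    [-1,  3,  0, -1, -1],
    [-1,  0,  2, -1,  0],
    [-1, -1, -1,  4, -1],
    [ 0, -1,  0, -1,  2],
])

# Feed 1 A into node 1 and take it out of node 2
i = np.array([1.0, -1.0, 0.0, 0.0, 0.0])

# L is singular (rank n-1), so v is determined only up to an additive
# constant; pinv picks the particular solution orthogonal to (1,1,...,1)
v = np.linalg.pinv(L) @ i
print(round(v[0] - v[1], 6))  # 0.571429 = 4/7, the resistance across edge 1-2
```

Potential differences are unaffected by the arbitrary constant, so the effective resistance $v_1 - v_2$ is well defined.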

In the Physics Olympiad circle, there is a problem about electric circuits known as Foster's Theorem. It appears as Problem 4.9 in Electricity and Magnetism by E. M. Purcell. The proof provided in the text considers a clever current superposition that simplifies the summation. Here we provide an alternate proof which shows the power of linear algebra.

The theorem goes as follows: the sum of resistance distances across all the edges of the graph equals $n-1$. The resistance distance across an edge is the effective resistance between its endpoints, assuming all edges of the circuit are $1\,\Omega$ resistors. For example, for fig 1, the resistances across the endpoints of the edges are $\frac{4}{7}, \, \frac{10}{21}, \, \frac{13}{21}, \, \frac{13}{21}, \,\frac{10}{21}, \, \frac{13}{21}, \, \frac{13}{21}$, and their sum is $4 = 5-1$.
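We can check this sum numerically. The sketch below uses the standard identity $R_{uv} = L^+_{uu} + L^+_{vv} - 2L^+_{uv}$, expressing effective resistance via the pseudoinverse of the Laplacian (a known fact, not proved in this post):

```python
import numpy as np

# Laplacian and (0-indexed) edge list of the graph in fig 1
L = np.array([
    [ 3, -1, -1, -1,  0],
    [-1,  3,  0, -1, -1],
    [-1,  0,  2, -1,  0],
    [-1, -1, -1,  4, -1],
    [ 0, -1,  0, -1,  2],
])
edges = [(0, 1), (0, 2), (0, 3), (1, 3), (1, 4), (2, 3), (3, 4)]

Lp = np.linalg.pinv(L)
# R_uv = Lp[u,u] + Lp[v,v] - 2*Lp[u,v], summed over all edges
total = sum(Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges)
print(round(total, 6))  # 4.0 = n - 1, as Foster's Theorem predicts
```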

Ohm's Law tells us that this resistance equals the potential difference between the endpoints when a current of 1 A is fed into one endpoint and taken out of the other. To evaluate this potential difference, we must use Kirchhoff's Laws. Consider the current vectors that feed 1 A into a node and take it out from a neighbour. There are $e$ such vectors, and they are exactly the columns of $B$! For each of these $e$ current vectors, there is a corresponding $v$ vector. We can write the $e$ equations simultaneously by letting $i$ be the $n \times e$ matrix $B$, and $v$ the corresponding $n \times e$ matrix $V$ whose columns are the solutions for each case. The equation now becomes

$$LV = B$$

But $L$ is equal to $BB^T$, and we have $$B B^TV = B$$

You might see where this is going, but it's a bit more subtle than cancelling $B$. What is the product $B^T V$? It is an $e \times e$ matrix, and we care particularly about its diagonal terms, which represent the potential difference across the edge into which the current is fed. These diagonal terms are exactly the resistances across the endpoints of the edges! Thus the sum we are looking for is the sum of the diagonal terms of $B^TV$, which is called the trace. The trace of a matrix equals the sum of its eigenvalues (why?). Try to complete the proof from here.

Each of the rows of $B$ is a left eigenvector of $B^T V$ with eigenvalue $1$. There are $n-1$ linearly independent ones (why?), so the matrix $B^T V$ has the eigenvalue $1$ at least $n-1$ times. But since $\text{rank}(AB) \le \min\{\text{rank}(A), \text{rank}(B)\}$, it can have rank at most $n-1$. Thus $n-1$ eigenvalues are $1$, and the rest must be $0$. The sum of the eigenvalues, and thus the trace, equals $n-1$. Can you obtain a similar result when the resistances are not all equal? Can you modify the method to prove Foster's Second Theorem?
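The eigenvalue structure of $B^T V$ can be verified numerically for fig 1 (using the pseudoinverse to pick one solution $V$ of $LV = B$, a choice of mine; any solution gives the same diagonal):

```python
import numpy as np

# Incidence matrix of fig 1 (n = 5 nodes, e = 7 edges)
B = np.array([
    [ 1,  1,  1,  0,  0,  0,  0],
    [-1,  0,  0,  0, -1,  0,  1],
    [ 0,  0, -1,  1,  0,  0,  0],
    [ 0, -1,  0, -1,  1, -1,  0],
    [ 0,  0,  0,  0,  0,  1, -1],
])
L = B @ B.T

# One solution of LV = B: each column of V holds the node potentials
# for the corresponding edge's unit-current injection
V = np.linalg.pinv(L) @ B
M = B.T @ V

print(round(np.trace(M), 6))                        # 4.0 = n - 1
print(np.round(np.sort(np.linalg.eigvalsh(M)), 6))  # three 0s and four 1s
```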

The discrete Laplacian in electric circuits has much the same utility as the continuous Laplacian from calculus. Just as the Laplacian operator in calculus is Hermitian, the Laplacian matrix is real and symmetric (a special case of being Hermitian). By the spectral theorem, it has an orthonormal eigenbasis in which it is diagonal. By transferring Poisson's Equation to the eigenbasis of the Laplacian, we only have to solve a Fourier decomposition problem, and this provides a mechanical method for solving finite or infinite circuits. (If you are unfamiliar with this method, I recommend checking out a mathematical physics/quantum mechanics resource, which will typically contain all of the fun and none of the rigour :) ) An illustration of this method applied to a famous problem can be seen here: Infinite Grid of Resistors.

Counting Trees

A tree is a connected graph with $n$ vertices and no cycles; it must have $n-1$ edges. A labelled tree is a tree whose nodes are labelled. We will count the number of labelled trees on $n$ vertices by algebraic methods.

Spanning Tree: A subgraph of a connected graph $G(n,e)$ is a spanning tree if it contains all vertices of $G$ and is a tree. Two spanning trees are the same if they have the same vertices and edges.

Can you draw the spanning trees of the graph in fig 1? There are 21 of them. The number of spanning trees of a graph is denoted $\tau$.

Consider a complete graph on $n$ vertices, denoted $K_n$, where every pair of vertices is adjacent. Let the nodes be labelled from $1$ to $n$. The spanning trees of this graph are in one-to-one correspondence with the labelled trees on $n$ vertices. We will count the number of spanning trees of a connected graph.

Recall the condition for $n-1$ edges to form a tree: their corresponding columns in $B$ are linearly independent. This means that there is some $(n-1) \times (n-1)$ submatrix (obtained by removing a row) whose determinant is non-zero. In fact, you get the same determinant, up to sign, no matter which row is removed (why?). What is the value of this determinant for any of the spanning trees in fig 1? Prove that it is $\pm 1$ for any tree (induction ftw), and 0 otherwise.
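For instance, here is a check for one spanning tree of fig 1, using the edges 1-2, 1-4, 1-3, 5-4 (columns 1, 2, 3 and 6 of the $B$ given earlier):

```python
import numpy as np

# Incidence matrix of fig 1
B = np.array([
    [ 1,  1,  1,  0,  0,  0,  0],
    [-1,  0,  0,  0, -1,  0,  1],
    [ 0,  0, -1,  1,  0,  0,  0],
    [ 0, -1,  0, -1,  1, -1,  0],
    [ 0,  0,  0,  0,  0,  1, -1],
])

# Columns 0, 1, 2, 5 (edges 1-2, 1-4, 1-3, 5-4) form a spanning tree
B_tree = B[:, [0, 1, 2, 5]]
sub = B_tree[:4, :]  # drop the last row to get a 4x4 submatrix
print(round(np.linalg.det(sub)))  # +1 or -1 for a spanning tree
```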

Cauchy-Binet Formula

For matrices $A_{m \times n}$ and $B_{n \times m}$ with $n \ge m$, the determinant $\text{det}(AB)$ is given by the Cauchy-Binet summation:

$$\text{det}(AB) = \sum \text{det}(A_S) \text{det}(B_S)$$

where the summation is over all choices $S$ of $m$ columns of $A$ and the corresponding $m$ rows of $B$. The proof can be taken as an exercise, or can be found here.
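The formula is easy to test numerically on small random matrices (a throwaway sketch, with an arbitrary seed):

```python
import numpy as np
from itertools import combinations

# Small random integer matrices: A is 2x4, B is 4x2
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 2)).astype(float)

lhs = np.linalg.det(A @ B)
# Sum of det(A_S) * det(B_S) over all 2-element column subsets S
rhs = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
          for S in combinations(range(4), 2))
print(np.isclose(lhs, rhs))  # True
```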

The Matrix-Tree Theorem

We apply the Cauchy-Binet formula to the product $BB^T$. Since we know the determinants with one row of $B$ removed, we remove a certain row from $B$ and a certain column (not necessarily the same) from $B^T$. Now we have an $(n-1) \times e$ and an $e \times (n-1)$ matrix, whose $(n-1) \times (n-1)$ subdeterminants are $\pm 1$ when the chosen edges form a spanning tree (and $0$ otherwise), with the same sign between $B$ and $B^T$ (why?).

Thus, the terms in the Cauchy-Binet summation are $1$ if the chosen $n-1$ edges form a spanning tree, and $0$ otherwise. The sum counts the number of spanning trees of the graph, and it is equal to a cofactor of the Laplacian! So every cofactor of the Laplacian equals the same value: the number of spanning trees of the graph. It can also be shown that this cofactor is equal to

$$\tau = \dfrac{1}{n} \lambda_2 \lambda_3 \cdots \lambda_n$$

where $\lambda_2 \cdots \lambda_n$ are the non-zero eigenvalues of the Laplacian. (Consider the coefficient of $x$ in the characteristic polynomial of $L$)
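Both forms of the Matrix-Tree theorem can be verified for fig 1, recovering the count of 21 spanning trees mentioned earlier:

```python
import numpy as np

# Laplacian of fig 1
L = np.array([
    [ 3, -1, -1, -1,  0],
    [-1,  3,  0, -1, -1],
    [-1,  0,  2, -1,  0],
    [-1, -1, -1,  4, -1],
    [ 0, -1,  0, -1,  2],
])
n = L.shape[0]

# Any cofactor: delete row 0 and column 0, take the determinant
tau = round(np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1)))
print(tau)  # 21 spanning trees

# The same count from the nonzero eigenvalues of L
lam = np.sort(np.linalg.eigvalsh(L))  # lam[0] is the zero eigenvalue
print(round(np.prod(lam[1:]) / n))    # 21 again
```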

$K_n$ and $K_{m,n}$

What is the number of spanning trees of $K_n$? The Laplacian of $K_n$ is given by 

$$\begin{pmatrix} n-1 & -1 & -1 & \cdots \\ -1 & n-1 & -1 & \cdots \\ -1 & -1 & n-1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}   $$

Clearly, $n$ is an eigenvalue, and since $L - n I$ has nullity $n-1$, the eigenvalue $n$ has multiplicity $n-1$; these are precisely the values $\lambda_2, \lambda_3, \cdots, \lambda_n$. Thus, from the Matrix-Tree theorem, the number of spanning trees is $$\tau = \frac{1}{n} n^{n-1} = n^{n-2}$$

An elegant combinatorial argument for the same formula by means of the Prufer Code can be found here.
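A small numerical confirmation of Cayley's formula via the Matrix-Tree theorem (the helper function is mine, for this illustration):

```python
import numpy as np

def num_spanning_trees(L):
    """Matrix-Tree theorem: any cofactor of the Laplacian."""
    return round(np.linalg.det(np.delete(np.delete(L, 0, 0), 0, 1)))

n = 6
L_Kn = n * np.eye(n) - np.ones((n, n))  # Laplacian of K_n
print(num_spanning_trees(L_Kn), n ** (n - 2))  # both are 1296
```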

How many spanning trees does the Complete Bipartite Graph $K_{m,n}$ have? The Laplacian is

$$\begin{pmatrix} m & \cdots & 0 & -1 & \cdots & -1 \\ \vdots & \ddots & \vdots &\vdots & \ddots & \vdots \\ 0 & \cdots & m & -1 & \cdots & -1\\ -1 & \cdots & -1 & n &\cdots &0 \\ \vdots & \ddots & \vdots &\vdots & \ddots & \vdots \\ -1 & \cdots& -1&0&\cdots&n\end{pmatrix}$$

with $m$ repeated $n$ times and vice versa. Thus $m$ is an eigenvalue of multiplicity $n-1$, $n$ is an eigenvalue of multiplicity $m-1$, and from the trace, $m+n$ and $0$ are the remaining two eigenvalues. Thus, the number of spanning trees is

$$\tau = n^{m-1} m^{n-1}$$

Can you modify the Prufer Code to obtain an encoding for these bipartite spanning trees?
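The bipartite formula also checks out numerically, e.g. for $K_{3,4}$ (again via a cofactor of the Laplacian):

```python
import numpy as np

m, n = 3, 4
N = m + n
# Adjacency matrix of K_{m,n}: every left vertex joined to every right vertex
A = np.zeros((N, N))
A[:m, m:] = 1
A[m:, :m] = 1
L = np.diag(A.sum(axis=1)) - A

# Any cofactor of L counts the spanning trees
tau = round(np.linalg.det(np.delete(np.delete(L, 0, 0), 0, 1)))
print(tau, n ** (m - 1) * m ** (n - 1))  # both are 432
```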


This article contains multiple results from Linear Algebra, and some from Graph Theory as well. It might be a little difficult to follow, and I would highly recommend that you go through the rigorous nitty-gritty of every result used and obtained, to gain a complete picture. If you have questions, feel free to contact me through the OMC Discord (@monster1789#9722). I hope this gave you a glimpse into the utility of Linear Algebra, and most of all, that you enjoyed the post!
Fun Fact: this formalism, and most of these results, were introduced to me in a course called 'Intro to Electrical Engineering'.

Ameya Vikrama Singh

