Finite subgroups of SL(2,Z) are cyclic

Posted: December 8, 2021 in Basic Algebra, Groups, Matrices

Let R be a commutative ring with identity. Here we defined the general linear group \text{GL}(n,R), and here we defined the special linear group \text{SL}(n,R). In this post, we are interested in \text{SL}(2,\mathbb{Z}), the group of all 2 \times 2 matrices with integer entries and determinant 1.

Theorem. Every finite subgroup of \text{SL}(2,\mathbb{Z}) is cyclic.
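Before the proof, here is a quick illustration (a small Python/SymPy sketch, not part of the argument): the subgroup of \text{SL}(2,\mathbb{Z}) generated by -I and S=\begin{pmatrix}0 & -1 \\ 1 & 0 \end{pmatrix} is finite and, as the theorem predicts, cyclic; it is just \langle S \rangle, of order 4.

```python
from sympy import Matrix, eye

# Illustration only (assumes SymPy; not part of the proof below).
# The subgroup of SL(2,Z) generated by -I and S is finite and, as the
# theorem asserts, cyclic: it equals <S>, which has order 4.
S = Matrix([[0, -1], [1, 0]])
gens = [-eye(2), S]
assert all(g.det() == 1 for g in gens)  # generators lie in SL(2,Z)

# close {I} under right multiplication by the generators
group = set()
frontier = [eye(2)]
while frontier:
    M = frontier.pop()
    key = tuple(M)
    if key in group:
        continue
    group.add(key)
    frontier.extend(M * g for g in gens)

powers_of_S = {tuple(S**k) for k in range(4)}
print(len(group), group == powers_of_S)  # prints: 4 True
```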

Proof (Y. Sharifi). Define the group homomorphism

\varphi: \text{GL}(2,\mathbb{Z}) \to \text{GL}(2,\mathbb{Z}_3)

by \varphi(A)=\overline{A}, where the entries of \overline{A} are just the entries of A modulo 3. By Minkowski's theorem, if G is a finite subgroup of \text{GL}(2,\mathbb{Z}), then G \cong \varphi(G). Let \psi be the restriction of \varphi to \text{SL}(2,\mathbb{Z}). So

\psi: \text{SL}(2,\mathbb{Z}) \to \text{SL}(2,\mathbb{Z}_3)

is defined by \psi(A)=\overline{A} and if G is a finite subgroup of \text{SL}(2,\mathbb{Z}), then G \cong \psi(G). Now, by this post, \text{SL}(2,\mathbb{Z}_3)=\text{SL}(2,3) has only one non-cyclic proper subgroup Q, and Q \cong Q_8, the quaternion group. We also showed in that post that Q is generated by the following elements

\bold{i}=\begin{pmatrix}0 & -1 \\ 1 & 0 \end{pmatrix}, \ \ \ \ \ \bold{j}=\begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}.
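As a quick sanity check (again a SymPy sketch, not needed for the proof), these two matrices do satisfy the quaternion relations over \mathbb{Z}_3:

```python
from sympy import Matrix, eye

# Sanity check (assumes SymPy): over Z_3, the generators above satisfy the
# quaternion relations i^2 = j^2 = -I and ij = -ji, and both have determinant 1.
i = Matrix([[0, -1], [1, 0]])
j = Matrix([[1, 1], [1, -1]])

def mod3(M):
    return M.applyfunc(lambda x: x % 3)

assert i.det() % 3 == 1 and j.det() % 3 == 1  # both lie in SL(2, Z_3)
assert mod3(i**2) == mod3(-eye(2))            # i^2 = -I  (mod 3)
assert mod3(j**2) == mod3(-eye(2))            # j^2 = -I  (mod 3)
assert mod3(i*j) == mod3(-(j*i))              # ij = -ji  (mod 3)
print("quaternion relations hold mod 3")
```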

So in order to complete the proof, we only need to show that if G is a finite subgroup of \text{SL}(2,\mathbb{Z}), then \psi(G) \ne Q. Suppose, to the contrary, that \psi(G)=Q for some finite subgroup G of \text{SL}(2,\mathbb{Z}). Then there exist A,A' \in G such that \psi(A)=\bold{i}, \ \psi(A')=\bold{j}. Let

A=\begin{pmatrix}a & b \\ c & d \end{pmatrix}, \ \ \ \ \ \ A'=\begin{pmatrix}a' & b' \\ c' & d' \end{pmatrix}.

So we have b,d' \equiv -1 \mod 3 and c,a',b',c' \equiv 1 \mod 3; in particular, b'c-bc' \equiv 2 \mod 3, and hence

b \ne 0, \ \ \ \ b' \ne 0, \ \ \ \ b'c - bc' \ne 0. \ \ \ \ \ \ \ \ (1)

Now, since \bold{i}^2=\bold{j}^2=-I, we have \psi(A^2)=\bold{i}^2=\bold{j}^2=\psi(A'^2). Since A^2,A'^2 \in G and \psi is injective on G, it follows that A^2=A'^2, which gives

b(a+d)=b'(a'+d'), \ \ \ \ c(a+d)=c'(a'+d').
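Here we have simply compared the off-diagonal entries of

A^2=\begin{pmatrix}a^2+bc & b(a+d) \\ c(a+d) & bc+d^2 \end{pmatrix}, \ \ \ \ \ \ A'^2=\begin{pmatrix}a'^2+b'c' & b'(a'+d') \\ c'(a'+d') & b'c'+d'^2 \end{pmatrix}.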

Multiplying the first equation by c' and the second one by b' and then subtracting gives (b'c-bc')(a+d)=0; similarly, (b'c-bc')(a'+d')=0. So, by (1), \ a+d=a'+d'=0, that is, \ d=-a, \ d'=-a'. Thus

A=\begin{pmatrix}a & b \\ c & -a \end{pmatrix}, \ \ \ \ \ \ A'=\begin{pmatrix}a' & b' \\ c' & -a' \end{pmatrix}.

Since A,A' \in \text{SL}(2,\mathbb{Z}), we have \det(A)= \det(A')=1 and so

a^2+bc=a'^2+b'c'=-1. \ \ \ \ \ \ \ \ \ \ \ (2)

Also, since \bold{i}\bold{j}=-\bold{j}\bold{i}=\bold{j}^{-1}\bold{i}, we have \psi(AA')=\psi(A'^{-1}A) and so, again by injectivity of \psi on G, \ AA'=A'^{-1}A, which gives

2aa'+b'c+bc'=0. \ \ \ \ \ \ \ \ \ \ \ (3)
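To see this, note that since \det(A')=1, we have

A'^{-1}=\begin{pmatrix}-a' & -b' \\ -c' & a' \end{pmatrix},

and comparing, say, the (1,1)-entries of AA' and A'^{-1}A gives aa'+bc'=-aa'-b'c, which is (3).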

We now get from (1), (2) that

\displaystyle c=-\frac{1+a^2}{b}, \ \ \ \ \ c'=-\frac{1+a'^2}{b'}.

Plugging the above into (3) and multiplying through by bb' gives

2aa'bb'-b'^2(1+a^2)-b^2(1+a'^2)=0,

that is,

b^2+b'^2+(b'a-ba')^2=0,

which is impossible because b \ne 0, by (1). This contradiction completes the proof. \ \Box
